16 research outputs found

    3rd Many-core Applications Research Community (MARC) Symposium. (KIT Scientific Reports; 7598)

    This manuscript includes recent scientific work regarding the Intel Single-chip Cloud Computer and describes novel approaches for programming and run-time organization.

    Energy Efficient Routing Algorithms for Wireless Sensor Networks and Performance Evaluation of Quality of Service for IEEE 802.15.4 Networks

    The popularity of Wireless Sensor Networks (WSNs) has increased tremendously in recent times due to growth in Micro-Electro-Mechanical Systems (MEMS) technology. WSNs have the potential to connect the physical world with the virtual world by forming a network of sensor nodes. Sensor nodes are usually battery-operated devices, and hence energy saving is a major design issue. To prolong the network's lifetime, minimization of energy consumption should be implemented at all layers of the network protocol stack, from the physical to the application layer, including cross-layer optimization. In this thesis, clustering-based routing protocols for WSNs are discussed. In cluster-based routing, special nodes called cluster heads form a wireless backbone to the sink. Each cluster head collects data from the sensors belonging to its cluster and forwards it to the sink. In heterogeneous networks, cluster heads are equipped with more powerful energy sources, in contrast to homogeneous networks where all nodes have uniform and limited energy resources. It is therefore essential to avoid quick depletion of the cluster heads, so the cluster-head role rotates: each node works as a cluster head for a limited period of time. Energy savings in these approaches are obtained through cluster formation, cluster-head election, and data aggregation at the cluster-head nodes to reduce data redundancy. The first part of this thesis discusses clustering methods to improve the energy efficiency of homogeneous WSNs. It also proposes Bacterial Foraging Optimization (BFO) as an algorithm for cluster-head selection in WSNs. The simulation results show improved performance of BFO-based optimization over the LEACH, K-Means and direct methods in terms of total energy dissipation and the number of alive nodes in the network. IEEE 802.15.4 is the emerging next-generation standard designed for low-rate wireless personal area networks (LR-WPANs).
The second part of the work reported herein provides a performance evaluation of quality-of-service parameters for WSNs based on IEEE 802.15.4 star and mesh topologies. The performance has been evaluated for varying traffic loads using MANET routing protocols in QualNet 4.5. The data packet delivery ratio, average end-to-end delay, total energy consumption, network lifetime and percentage of time in sleep mode have been used as performance metrics. Simulation results show that DSR (Dynamic Source Routing) performs better than the DYMO (Dynamic MANET On-demand) and AODV (Ad-hoc On-demand Distance Vector) routing protocols for varying traffic loads.
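The cluster-head rotation described in this abstract can be illustrated with a minimal sketch. This is not the thesis's BFO algorithm; it assumes a LEACH-style probabilistic threshold for rotating the energy-hungry head role among nodes, and all names are illustrative:

```python
import random

def leach_threshold(p, r, was_head_recently):
    """LEACH-style threshold T(n) for round r with desired head fraction p.
    Nodes that served as cluster head in the last 1/p rounds are excluded,
    which forces the head role to rotate and spreads the energy cost."""
    if was_head_recently:
        return 0.0
    return p / (1 - p * (r % int(1 / p)))

def elect_cluster_heads(nodes, p, r):
    """Each node independently becomes a cluster head for this round if a
    uniform random draw falls below the rotation threshold."""
    heads = []
    for node in nodes:
        if random.random() < leach_threshold(p, r, node["recent_head"]):
            node["recent_head"] = True
            heads.append(node)
    return heads
```

The threshold grows as a round of rotation progresses (fewer eligible nodes remain), so every node serves as head roughly once per 1/p rounds; BFO-based selection replaces this random draw with an optimized choice.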

    Thread Scheduling Mechanisms for Multiple-Context Parallel Processors

    Scheduling tasks to efficiently use the available processor resources is crucial to minimizing the runtime of applications on shared-memory parallel processors. One factor that contributes to poor processor utilization is the idle time caused by long-latency operations, such as remote memory references or processor synchronization operations. One way of tolerating this latency is to use a processor with multiple hardware contexts that can rapidly switch to executing another thread of computation whenever a long-latency operation occurs, thus increasing processor utilization by overlapping computation with communication. Although multiple contexts are effective for tolerating latency, this effectiveness can be limited by memory and network bandwidth, by cache interference effects among the multiple contexts, and by critical tasks sharing processor resources with less critical tasks. This thesis presents techniques that increase the effectiveness of multiple contexts by intelligently scheduling threads to make more efficient use of processor pipeline, bandwidth, and cache resources. It proposes thread prioritization as a fundamental mechanism for directing the thread schedule on a multiple-context processor. A priority is assigned to each thread, either statically or dynamically, and is used by the thread scheduler to decide which threads to load in the contexts and which context to switch to on a context switch. We develop a multiple-context model that integrates both cache and network effects, and show how thread prioritization can both maintain high processor utilization and limit increases in critical-path runtime caused by multithreading. The model also shows that, in order to be effective in bandwidth-limited applications, thread prioritization must be extended to prioritize memory requests.
We show how simple hardware can prioritize the running of threads in the multiple contexts and the issuing of requests to both the local memory and the network. Simulation experiments show how thread prioritization is used in a variety of applications. Thread prioritization can improve the performance of synchronization primitives by minimizing the number of processor cycles wasted in spinning and by devoting more cycles to critical threads. It can be used in combination with other techniques to improve cache performance and minimize cache interference between different working sets in the cache. For applications that are critical-path limited, thread prioritization can improve performance by allowing processor resources to be devoted preferentially to critical threads. These experimental results show that thread prioritization is a mechanism that can be used to implement a wide range of scheduling policies.
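The core scheduling decision described above, choosing which thread to run next on a context switch by priority, can be sketched as follows. This is an illustrative software model of the mechanism, not the thesis's hardware design, and all names are hypothetical:

```python
import heapq

class PrioritizedContextScheduler:
    """Sketch of thread prioritization on a multiple-context processor:
    when the running thread stalls on a long-latency operation, the
    scheduler switches to the highest-priority ready thread."""

    def __init__(self):
        self._ready = []  # min-heap of (-priority, seq, thread_id)
        self._seq = 0     # tie-breaker: FIFO order within equal priority

    def add(self, thread_id, priority):
        """Make a thread ready; higher priority means scheduled sooner."""
        heapq.heappush(self._ready, (-priority, self._seq, thread_id))
        self._seq += 1

    def switch(self):
        """Called on a long-latency stall; returns the next thread to
        load into the context, or None if every thread is blocked."""
        if not self._ready:
            return None
        return heapq.heappop(self._ready)[2]
```

With a critical thread at priority 10 and a background worker at priority 1, `switch()` always resumes the critical thread first, which is exactly how prioritization keeps critical-path threads from waiting behind less critical work.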

    Cyberspace Cartography: The Case of On-line Territorial Privacy

    Territorial privacy, one of the central categories of privacy protection, involves setting boundaries that limit intrusion into an explicit space or locale. Initially, the Restatement (Second) of Torts, which defined the privacy tort of intrusion, as applied by courts, designated two notable classes of areas: “private” places, in which the individual can expect to be free from intrusion, and “non-private” places, in which the individual does not have a recognized expectation of privacy. In the physical world, courts ultimately held almost uniformly that the tort of intrusion could not occur in a public place or in a place that may be viewed from a public place. Cyberspace, on the other hand, was left without a public sphere, nor has a balanced territorial privacy policy so far been established. Instead, based on the category of database privacy protection, only a private privacy legal rule was adopted, and too widely so. One of the main explanations for this anomaly derives from cyberspace’s unique architecture. While the physical world is subject to a default rule of a continuous public sphere that is then subject to distinct proprietary private-sphere allotments, cyberspace architecture embeds a different structure. In the latter, apart from the Internet’s “public roads” or backbone transit infrastructure, which is distinctly regulated according to telecommunications and antitrust law, the present default rule contains a mosaic of private allotments, namely, neighboring proprietary web sites. This anomaly is even more acute given that the U.S. government, the FTC, and theoreticians alike have thus far developed neither a comprehensive nor a supportive boundary theory that could maintain territorial privacy. All three, instead, have implicitly or explicitly considered only technocentric boundary approaches.
From a legal perspective, the factual truths or scientific hypotheses underlying the existence of on-line spatiality, as discussed notably in the works of Johnson and Post, Lessig, Hunter, Lemley and others, should instead be only a parameter in establishing legal truth. In keeping with an alternative localist boundary approach, this study suggests that law could indeed construct a legal fiction of on-line locales, through which territorial privacy, ultimately, could be integrated into cyberspace privacy policy at large.

    Proceedings, MSVSCC 2015

    The Virginia Modeling, Analysis and Simulation Center (VMASC) of Old Dominion University hosted the 2015 Modeling, Simulation & Visualization Student Capstone Conference on April 16th. The Capstone Conference features students in Modeling and Simulation undergraduate and graduate degree programs and related fields from many colleges and universities. Students present their research to an audience of fellow students, faculty, judges, and other distinguished guests. For the students, these presentations afford them the opportunity to impart their innovative research to members of the M&S community from academic, industry, and government backgrounds. Also participating in the conference are faculty and judges who have volunteered their time to directly support their students’ research, facilitate the various conference tracks, serve as judges for each of the tracks, and provide overall assistance to the conference. 2015 marks the ninth year of the VMASC Capstone Conference for Modeling, Simulation and Visualization. This year’s conference attracted a number of fine student-written papers and presentations, resulting in a total of 51 research works that were presented. This year’s conference had record attendance thanks to the support from the various departments at Old Dominion University, other local universities, and the United States Military Academy at West Point. We greatly appreciated all of the work and energy that went into this year’s conference; it truly was a highly collaborative effort that resulted in a very successful symposium for the M&S community and all of those involved. Below you will find a brief summary of the best papers and best presentations, with some simple statistics on the overall conference contributions, followed by a table of contents organized by conference track category, with a copy of each included body of work.
Thank you again for your time and your contribution, as this conference is designed to continuously evolve and adapt to better suit the authors and M&S supporters. Dr. Yuzhong Shen, Graduate Program Director, MSVE, Capstone Conference Chair; John Shull, Graduate Student, MSVE, Capstone Conference Student Chair.

    The experience as a document: designing for the future of collaborative remembering in digital archives

    How does it feel when we remember together on-line? Who gets to say what is worth remembering? To understand how the user experience of participation is affecting the formation of collective memories in online environments, it is first important to take into consideration how the notion of memory has been transformed under the influence of the digital revolution. I aim to contribute to the field of User Experience (UX) research by theorizing on the felt experience of users from a memory perspective, taking into consideration aspects linked to both personal and collective memories in the context of connected environments. Harassment and hate speech in connected conversational environments are especially targeted at women and underprivileged communities, which has become a problem for digital archives of vernacular creativity (Burgess, J. E. 2007) such as YouTube, Twitter, Reddit and Wikipedia. An evaluation of the user experience of underprivileged communities in creative archives such as Wikipedia indicates the urgency of building a feminist space where women and queer folks can focus on knowledge production and learning without being harassed. The theoretical models and designs that I propose are the result of a series of prototype tests and case studies focused on cognitive tools for a mediated human memory operating inside transactive memory systems.
With them, I aim to imagine the means by which feminist protocols for UX design and research can assist in the building and maintenance of the archive as a safe/brave space. Working with perspectives from media theory, memory theory and gender studies, and centering the user experience of participation for women, queer folks, people of colour (POC) and other vulnerable and underrepresented communities as the main focus of inquiry, my research takes an interdisciplinary approach to interrogate how online misogyny and other forms of abuse are perceived by communities placed outside the center of hegemonic normativity, and how the user experience of online abuse is affecting the formation of collective memories in the context of online environments.

    Analysing and Reducing Costs of Deep Learning Compiler Auto-tuning

    Deep Learning (DL) is significantly impacting many industries, including automotive, retail and medicine, enabling autonomous driving, recommender systems and genomics modelling, amongst other applications. At the same time, demand for complex and fast DL models is continually growing. The most capable models tend to exhibit the highest operational costs, primarily due to their large computational resource footprint and inefficient utilisation of the computational resources employed by DL systems. In an attempt to tackle these problems, DL compilers and auto-tuners emerged, automating the traditionally manual task of DL model performance optimisation. While auto-tuning improves model inference speed, it is a costly process, which limits its wider adoption within DL deployment pipelines. The high operational costs associated with DL auto-tuning have multiple causes. During operation, DL auto-tuners explore large search spaces consisting of billions of tensor programs to propose potential candidates that improve DL model inference latency. Subsequently, DL auto-tuners measure candidate performance in isolation on the target device, which constitutes the majority of auto-tuning compute time. Suboptimal candidate proposals, combined with their serial measurement on an isolated target device, lead to prolonged optimisation time and reduced resource availability, ultimately reducing the cost-efficiency of the process. In this thesis, we investigate the reasons behind prolonged DL auto-tuning and quantify their impact on optimisation costs, revealing directions for improved DL auto-tuner design. Based on these insights, we propose two complementary systems: Trimmer and DOPpler. Trimmer improves tensor program search efficacy by filtering out poorly performing candidates, and controls end-to-end auto-tuning using cost objectives that monitor optimisation cost.
Simultaneously, DOPpler breaks long-held assumptions about serial candidate measurement by successfully parallelising measurements intra-device, with minimal penalty to optimisation quality. Through extensive experimental evaluation of both systems, we demonstrate that they significantly improve the cost-efficiency of auto-tuning (by up to 50.5%) across a plethora of tensor operators, DL models, auto-tuners and target devices.
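The candidate-filtering and cost-controlled tuning loop attributed to Trimmer can be sketched at a high level. This is not Trimmer's actual API; `predict_latency`, `measure`, the keep fraction and all other names are hypothetical illustrations of the idea:

```python
def trim_candidates(candidates, predict_latency, keep_fraction=0.2):
    """Filter tensor-program candidates before costly on-device
    measurement: keep only the fraction with the best (lowest)
    predicted latency, discarding poorly performing proposals."""
    ranked = sorted(candidates, key=predict_latency)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

def auto_tune(candidates, predict_latency, measure, budget_s):
    """Measure the surviving candidates on the target device until the
    optimisation-cost budget is exhausted, tracking the best result."""
    best, best_lat, spent = None, float("inf"), 0.0
    for cand in trim_candidates(candidates, predict_latency):
        lat = measure(cand)  # seconds spent measuring this candidate
        spent += lat
        if lat < best_lat:
            best, best_lat = cand, lat
        if spent >= budget_s:  # cost objective: stop when budget is hit
            break
    return best, best_lat
```

Filtering shrinks the measured set, and the explicit budget bounds end-to-end tuning cost; parallelising the `measure` calls across a device (the DOPpler idea) would attack the remaining serial bottleneck.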