
    Hierarchical multithreading: programming model and system software

    This paper addresses the underlying sources of performance degradation (e.g. latency, overhead, and starvation) and the obstacles to programmer productivity (e.g. explicit locality management and scheduling, performance tuning, fragmented memory, and synchronous global barriers) in order to dramatically enhance the broad effectiveness of parallel processing for high-end computing (HEC). We are developing a hierarchical threaded virtual machine (HTVM) that defines a dynamic, multithreaded execution model and programming model, providing an architecture abstraction for HEC system software and tools development. We are working on a prototype language, LITL-X (pronounced "little-X"), for Latency Intrinsic-Tolerant Language, which provides application programmers with a powerful set of semantic constructs to organize parallel computations in a way that hides or manages latency and limits the effects of overhead. This is quite different from locality management, although the intent of both strategies is to minimize the effect of latency on the efficiency of computation. We are working on a dynamic compilation and runtime model to achieve efficient LITL-X program execution. Several adaptive optimizations were studied, as was a methodology for incorporating domain-specific knowledge in program optimization. Finally, we plan to implement our method in an experimental testbed for an HEC architecture and perform a qualitative and quantitative evaluation on selected applications.
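
    LITL-X syntax is not reproduced in this abstract, so the latency-hiding idea it describes can only be illustrated loosely. The following Python sketch, with entirely hypothetical names, overlaps a slow remote operation with independent local work using standard-library futures; it is a minimal illustration of latency hiding, not the HTVM/LITL-X execution model itself.

        # Illustrative only: hide the latency of a slow operation by overlapping it
        # with independent work. All names here are hypothetical.
        import time
        from concurrent.futures import ThreadPoolExecutor

        def remote_fetch(n):
            """Stand-in for a high-latency operation (e.g. a remote memory access)."""
            time.sleep(0.5)            # simulated latency
            return sum(range(n))

        def local_work(chunk):
            """Independent computation that proceeds while the fetch is in flight."""
            return sum(x * x for x in chunk)

        with ThreadPoolExecutor(max_workers=2) as pool:
            pending = pool.submit(remote_fetch, 10_000)   # issue the slow request early
            partial = local_work(range(1_000))            # useful work in the meantime
            total = partial + pending.result()            # block only when the value is needed

        print(total)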

    Center for Programming Models for Scalable Parallel Computing: Future Programming Models


    Flexi-WVSNP-DASH: A Wireless Video Sensor Network Platform for the Internet of Things

    Video capture, storage, and distribution in wireless video sensor networks (WVSNs) critically depend on the resources of the nodes forming the sensor networks. In the era of big data, the Internet of Things (IoT), and distributed demand and solutions, there is a need for multi-dimensional data to be part of sensor-network data that is easily accessible and consumable by humans as well as machines. Images and video are expected to become as ubiquitous as scalar data in traditional sensor networks. The inception of video streaming over the Internet heralded a relentless search for effective ways of distributing video in a scalable and cost-effective way. There have been novel implementation attempts across several network layers. Due to the inherent complications of backward compatibility and the need for standardization across network layers, attention has refocused on addressing most of video distribution at the application layer. As a result, a few video-streaming solutions over the Hypertext Transfer Protocol (HTTP) have been proposed. Most notable are Apple's HTTP Live Streaming (HLS) and the Moving Picture Experts Group's Dynamic Adaptive Streaming over HTTP (MPEG-DASH). These frameworks do not address typical and future WVSN use cases. A highly flexible Wireless Video Sensor Network Platform and compatible DASH (WVSNP-DASH) are introduced. The platform's goal is to usher in video as a data element that can be integrated into traditional and non-Internet networks. A low-cost, scalable node is built from the ground up to be fully compatible with the Internet of Things machine-to-machine (M2M) concept, as well as to be easily re-targeted to new applications in a short time. The Flexi-WVSNP design includes a multi-radio node, middleware for sensor operation and communication, a cross-platform client-facing data retriever/player framework, scalable security, and a cohesive but decoupled hardware and software design. Doctoral Dissertation, Electrical Engineering.
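
    No client code appears in the abstract above; as a rough, hedged sketch of the adaptive-bitrate idea that HLS- and DASH-style streaming rely on, the Python fragment below picks the highest representation that the measured throughput can sustain. The URLs, bitrates, and segment naming are invented purely for illustration and are not part of WVSNP-DASH.

        # Hypothetical sketch of DASH-style adaptive segment fetching.
        # The manifest, URLs, and bitrates below are invented for illustration only.
        import time
        import urllib.request

        REPRESENTATIONS = {            # bitrate (bit/s) -> segment URL template
            250_000:   "http://example.com/video/low/seg_{:04d}.m4s",
            1_000_000: "http://example.com/video/mid/seg_{:04d}.m4s",
            4_000_000: "http://example.com/video/high/seg_{:04d}.m4s",
        }

        def fetch_segment(url):
            """Download one media segment and return (bytes, seconds taken)."""
            start = time.monotonic()
            data = urllib.request.urlopen(url).read()
            return data, time.monotonic() - start

        def pick_bitrate(throughput_bps):
            """Choose the highest representation the measured throughput can sustain."""
            affordable = [b for b in REPRESENTATIONS if b <= throughput_bps]
            return max(affordable) if affordable else min(REPRESENTATIONS)

        throughput = 500_000           # conservative initial estimate
        for i in range(1, 11):         # fetch the first ten segments
            bitrate = pick_bitrate(throughput)
            data, took = fetch_segment(REPRESENTATIONS[bitrate].format(i))
            throughput = int(len(data) * 8 / max(took, 1e-3))   # update the estimate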

    The DNA Cloud: Is it Alive?

    In this analysis, I will first present the current knowledge concerning the materiality of the internet-based Cloud, which I will henceforth refer to simply as the Cloud. For organisational purposes I have created two umbrella categories under which I place the ongoing research in the field. Scholars have been addressing the issue of Cloud materiality broadly through two prisms: sociological materiality and geopolitical materiality. The literature, of course, deals with the intricacies of the Cloud based on its present ferromagnetic storage functionality. However, developments in synthetic biology have led private tech companies and university spin-offs to flirt with the idea of a DNA-based cloud system. This prospect inevitably gives rise to unaddressed questions pertaining to the biological materiality (nucleotides instead of magnetic disks) of an upcoming cloud system of this nature, since the relevant queries bleed into the materiality of the human soul and body, and even the materiality of knowledge and memory. The novel investigation I will be conducting concerns a speculative cloud model, the technological mechanics of which are presently in an embryonic stage, and a basic question: whether this fabricated creation could be considered potentially alive.

    Summary of multi-core hardware and programming model investigations

    This report summarizes our investigations into multi-core processors and programming models for parallel scientific applications. The motivation for this study was to better understand the landscape of multi-core hardware, future trends, and the implications for system software on capability supercomputers. The results of this study are being used as input into the design of a new open-source lightweight kernel operating system targeted at future capability supercomputers made up of multi-core processors. A goal of this effort is to create an agile system that is able to adapt to, and efficiently support, whatever multi-core hardware and programming models gain acceptance by the community.

    Secure Communication in Disaster Scenarios

    During natural disasters or terrorist attacks, the existing communication infrastructure is often overloaded or fails completely. In these situations, mobile devices can be connected to one another using wireless ad hoc and disruption-tolerant networking to set up an emergency communication system for civilians and rescue services. Where available, a connection to cloud services on the Internet can be a valuable aid in crisis and disaster management. However, such communication systems carry serious security risks, since attackers may attempt to steal confidential data, inject fake notifications from emergency services, or carry out denial-of-service (DoS) attacks. This dissertation proposes new approaches to communication in emergency networks of mobile devices, ranging from communication between mobile devices to cloud services on servers on the Internet. These approaches improve the security of device-to-device communication, the security of emergency apps on mobile devices, and the security of server systems for cloud services.
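
    The dissertation's own protocols are not reproduced in this summary. As a minimal, hedged illustration of one ingredient such systems rely on (protecting an emergency message against tampering with a pre-shared key), the Python sketch below uses HMAC-SHA-256 from the standard library; the key and message format are hypothetical and this is not the scheme proposed in the thesis.

        # Minimal illustration: authenticate a device-to-device emergency message
        # with a pre-shared key (HMAC-SHA-256). Hypothetical key and message format;
        # not the dissertation's protocol.
        import hmac
        import hashlib
        import json

        PRE_SHARED_KEY = b"replace-with-real-key-material"   # hypothetical key

        def sign_message(payload: dict) -> dict:
            body = json.dumps(payload, sort_keys=True)
            tag = hmac.new(PRE_SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
            return {"body": body, "tag": tag}

        def verify_message(envelope: dict) -> bool:
            expected = hmac.new(PRE_SHARED_KEY, envelope["body"].encode(),
                                hashlib.sha256).hexdigest()
            return hmac.compare_digest(expected, envelope["tag"])

        msg = sign_message({"type": "alert", "text": "shelter at town hall"})
        print(verify_message(msg))   # True if the message was not altered in transit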

    Genetic Analysis of Arbuscular Mycorrhiza Development in Lotus japonicus


    Internet of Underwater Things and Big Marine Data Analytics -- A Comprehensive Survey

    The Internet of Underwater Things (IoUT) is an emerging communication ecosystem developed for connecting underwater objects in maritime and underwater environments. The IoUT technology is intricately linked with intelligent boats and ships, smart shores and oceans, automatic marine transportation, positioning and navigation, underwater exploration, disaster prediction and prevention, as well as intelligent monitoring and security. The IoUT has an influence at various scales, ranging from a small scientific observatory, to a mid-sized harbor, to global oceanic trade. The network architecture of the IoUT is intrinsically heterogeneous and should be sufficiently resilient to operate in harsh environments. This creates major challenges in terms of underwater communications, whilst relying on limited energy resources. Additionally, the volume, velocity, and variety of data produced by sensors, hydrophones, and cameras in the IoUT is enormous, giving rise to the concept of Big Marine Data (BMD), which has its own processing challenges. Hence, conventional data processing techniques will falter, and bespoke Machine Learning (ML) solutions have to be employed for automatically learning the specific BMD behavior and features, facilitating knowledge extraction and decision support. The motivation of this paper is to comprehensively survey the IoUT, BMD, and their synthesis. It also aims to explore the nexus of BMD with ML. We set out from underwater data collection and then discuss the family of IoUT data communication techniques, with an emphasis on the state-of-the-art research challenges. We then review the suite of ML solutions suitable for BMD handling and analytics. We treat the subject deductively from an educational perspective, critically appraising the material surveyed. (Comment: 54 pages, 11 figures, 19 tables; IEEE Communications Surveys & Tutorials, peer-reviewed academic journal.)
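
    The survey does not prescribe a single algorithm, so only a loose illustration is possible here. The Python sketch below flags unusual readings in a sensor stream with a rolling z-score, standing in for the far richer ML pipelines the survey reviews for Big Marine Data; the data, window size, and threshold are invented.

        # Illustrative only: flag anomalous sensor readings with a rolling z-score.
        # Window size, threshold, and data are invented; real BMD analytics would use
        # much richer ML models.
        import random
        from collections import deque
        from statistics import mean, stdev

        def rolling_anomalies(stream, window=20, threshold=3.0):
            recent = deque(maxlen=window)
            for t, value in enumerate(stream):
                if len(recent) == window:
                    mu, sigma = mean(recent), stdev(recent)
                    if sigma > 0 and abs(value - mu) / sigma > threshold:
                        yield t, value        # reading far outside recent behaviour
                recent.append(value)

        random.seed(0)
        readings = [10.0 + random.gauss(0, 0.5) for _ in range(50)] + [40.0] + [10.0] * 10
        print(list(rolling_anomalies(readings)))   # the spike at index 50 is flagged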

    Extracting circadian clock information from a single time point assay

    A working internal circadian clock allows a healthy organism to keep time in order to anticipate transitions between night and day, allowing the temporal optimisation and control of internal processes. The internal circadian clock is regulated by a set of core genes that form a tightly coupled oscillator system. These oscillators are autonomous and robust to noise, but can be slowly reset by external signals that are processed by the master clock in the brain. In this thesis we explore the robustness of a tightly coupled oscillator model of the circadian clock, and show that its deterministic and stochastic forms are both significantly robust to noise. Using a simple linear algebra approach to rhythmicity detection, we show that a small set of circadian clock genes are rhythmic and synchronised in mouse tissues, and rhythmic and synchronised in a group of human individuals. These sets of tightly regulated, robust oscillators are genes that we use to define the expected behaviour of a healthy circadian clock. We use these "time fingerprints" to design a model, dubbed "Time-Teller", that can be used to tell the time from single time point samples of the mouse or human transcriptome. The dysfunction of the molecular circadian clock is implicated in several major diseases, and there is significant evidence that disrupted circadian rhythm is a hallmark of many cancers. Convincing results showing the dysfunction of the circadian clock in solid tumours are lacking, due to the difficulties of studying circadian rhythms in tumours within living mammals. Instead of developing biological assays to study this, we take advantage of the design of Time-Teller, using its underlying features to build a metric, ϴ, that indicates dysfunction of the circadian clock. We use Time-Teller to explore the clock function of samples from existing, publicly available tumour transcriptome data. Although multiple algorithms have been published with the aim of "time-telling" using transcriptome data, none of them have been reported to be able to tell the time of single samples, or to provide metrics of clock dysfunction in single samples. Time-Teller is presented in this thesis as an algorithm that both tells the time of a single time-point sample and provides a measure of clock function for that sample. In a case study, we use the clock function metric, ϴ, as a retrospective prognostic marker for breast cancer using data from a completed clinical trial. ϴ is shown to correlate with many prognostic markers of breast cancer, and we show how ϴ could also be a predictive marker for treatment efficacy and patient survival.
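
    The Time-Teller model itself is not specified in this abstract; the "simple linear algebra approach to rhythmicity detection" mentioned above can, however, be illustrated under stated assumptions. The NumPy sketch below fits a single 24 h harmonic by ordinary least squares (a cosinor-style fit) and uses the variance explained as a rhythmicity score; the synthetic expression profile and the 0.8 threshold are invented, and this is not the thesis's algorithm.

        # Hedged illustration: score rhythmicity by fitting a 24 h sinusoid with
        # ordinary least squares. Synthetic data and threshold are invented; this is
        # not the Time-Teller algorithm.
        import numpy as np

        def rhythmicity_r2(times_h, expression, period_h=24.0):
            """Fraction of variance explained by one sinusoid of the given period."""
            w = 2 * np.pi / period_h
            X = np.column_stack([np.ones_like(times_h),
                                 np.cos(w * times_h),
                                 np.sin(w * times_h)])
            coef, *_ = np.linalg.lstsq(X, expression, rcond=None)
            residual = expression - X @ coef
            return 1.0 - residual.var() / expression.var()

        t = np.arange(0, 48, 4.0)                        # samples every 4 h over 2 days
        profile = 5 + 2 * np.cos(2 * np.pi * t / 24)     # synthetic clock-gene profile
        print(rhythmicity_r2(t, profile) > 0.8)          # True: strongly rhythmic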