
    Near-Memory Address Translation

    Memory and logic integration on the same chip is becoming increasingly cost-effective, creating the opportunity to offload data-intensive functionality to processing units placed inside memory chips. The introduction of memory-side processing units (MPUs) into conventional systems runs into virtual memory as the first major showstopper: without efficient hardware support for address translation, MPUs have very limited applicability. Unfortunately, conventional translation mechanisms fall short of providing fast translations as contemporary memories exceed the reach of TLBs, making expensive page walks common. In this paper, we are the first to show that the historically important flexibility to map any virtual page to any page frame is unnecessary in today's servers. We find that limiting the associativity of the virtual-to-physical mapping incurs no penalty and, combined with careful data placement in the MPU's memory, breaks the translate-then-fetch serialization, allowing translation and data fetch to proceed independently and in parallel. We propose the Distributed Inverted Page Table (DIPTA), a near-memory structure in which the smallest memory partition keeps the translation information for its share of the data, ensuring that the translation completes together with the data fetch. DIPTA completely eliminates the performance overhead of translation, achieving speedups of up to 3.81x and 2.13x over conventional translation using 4KB and 1GB pages, respectively.
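
    The key mechanism described above, limiting the associativity of the virtual-to-physical mapping so that the candidate page frames can be computed directly from the virtual address, can be pictured with a small software model. The sketch below is only illustrative: the page size, table sizes, associativity, and function names are assumptions for the example and do not describe the actual DIPTA hardware.

```python
# Illustrative software model of a limited-associativity virtual-to-physical
# mapping; all sizes and names are hypothetical, not the DIPTA hardware design.

PAGE_SHIFT = 12            # 4 KB pages (assumed)
NUM_FRAMES = 1 << 16       # modeled physical page frames (assumed)
ASSOC = 4                  # associativity of the V->P mapping (assumed)

# One inverted-table entry per frame, kept "near" the frame's data:
# inverted[frame] = virtual page number currently mapped there, or None.
inverted = [None] * NUM_FRAMES

def candidate_frames(vpn):
    """With limited associativity, a virtual page can live in only a few
    frames, all computable directly from the virtual page number."""
    set_index = vpn % (NUM_FRAMES // ASSOC)
    return [set_index * ASSOC + way for way in range(ASSOC)]

def near_memory_access(vpn, offset, memory):
    """Probe every candidate frame; the colocated inverted entry confirms
    which probe held the right page, so translation need not precede the
    data fetch (modeled sequentially here for clarity)."""
    for frame in candidate_frames(vpn):
        if inverted[frame] == vpn:        # translation check next to the data
            return memory[(frame << PAGE_SHIFT) + offset]
    raise KeyError("page fault: virtual page not resident")
```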

    FPGA-based range-limited molecular dynamics acceleration

    Molecular Dynamics (MD) is a computer simulation technique that executes iteratively over discrete, very small time steps. It has been widely used in materials science and computer-aided drug design for many years and serves as a crucial benchmark in high-performance computing (HPC). Numerous MD packages have been developed and effectively accelerated using GPUs. However, as the limits of Moore's Law are reached, the performance of an individual computing node has hit a bottleneck, while the performance of multiple nodes is primarily hindered by scalability issues, particularly when dealing with small datasets. In this thesis, acceleration for small datasets is the main focus. With the recent COVID-19 pandemic, drug discovery has gained significant attention, and MD has emerged as a crucial tool in this process. In the critical domain of drug discovery, small simulations involving approximately 50K particles are frequently employed. However, small simulations do not necessarily translate to faster results, as long-term simulations comprising billions of MD iterations and more are essential in this context. In addition to dataset size, the problem of interest is further constrained. Referred to as the most computationally demanding aspect of MD, the evaluation of range-limited (RL) forces not only accounts for 90% of the MD computation workload but also involves irregular mappings of 3-D data onto 2-D processor networks. To emphasize, this thesis centers on the acceleration of RL MD specifically for small datasets. In order to address the single-node bottleneck and multi-node scaling challenges, the thesis is organized into two progressive stages of investigation. The first stage delves into enhancing single-node efficiency by examining factors such as workload mapping from 3-D to 2-D, data routing, and data locality. The second stage studies multi-node scalability, with a particular emphasis on strong scaling, bandwidth demands, and the synchronization mechanisms between nodes. Our results show that our design on a Xilinx U280 FPGA achieves 51.72x and 4.17x speedups with respect to an Intel Xeon Gold 6226R CPU and a Quadro RTX 8000 GPU, respectively. Our research on strong scaling also demonstrates that 8 Xilinx U280 FPGAs connected to a switch achieve a 4.67x speedup compared to an Nvidia V100 GPU.
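
    To make the computational kernel concrete, the sketch below gives a minimal range-limited (cutoff) Lennard-Jones pair-force routine in Python, assuming reduced units and an orthorhombic periodic box. It is only a naive O(N^2) reference for the force evaluation the thesis accelerates; the function name and parameters are chosen for the example, and the FPGA design itself is not described by this code.

```python
import numpy as np

def range_limited_forces(pos, box, cutoff):
    """Naive reference for cutoff-based Lennard-Jones pair forces
    (epsilon = sigma = 1). pos is an (N, 3) array, box a 3-vector."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            d -= box * np.round(d / box)          # minimum-image convention
            r2 = float(np.dot(d, d))
            if r2 < cutoff ** 2:
                inv2 = 1.0 / r2
                inv6 = inv2 ** 3
                f = 24.0 * inv2 * inv6 * (2.0 * inv6 - 1.0) * d
                forces[i] += f                    # Newton's third law:
                forces[j] -= f                    # each pair evaluated once
    return forces
```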

    Combining and analysing diverse biological data

    The electronic version of this thesis does not include the publications. A fast advance in biotechnological innovation and decreasing production costs have led to an explosion of experimental data being produced in laboratories around the world. Individual experiments allow biological processes, e.g. diseases, to be understood from different angles. However, in order to get a systematic view of a disease it is necessary to combine these heterogeneous data. The large amounts of diverse data require machine learning models that can help, for example, to identify which genes are related to disease. Additionally, there is a need to compose reliable integrated data sets that researchers can effectively work with. In this thesis we demonstrate how to combine and analyse different types of biological data using the example of three biological domains: Alzheimer's disease, immunology, and toxicology. Alzheimer's disease is an age-related neurodegenerative disease for which no effective treatment exists. More specifically, we combine data sets related to Alzheimer's disease into a novel heterogeneous network-based data set for Alzheimer's disease (HENA). We then apply graph convolutional networks, state-of-the-art deep learning methods, to the node classification task in HENA to find genes that are potentially associated with the disease. Combining patients' data related to a chronic inflammatory immune disease, psoriasis, helps to uncover its pathological mechanisms and to find better treatments in the future. We analyse laboratory data from patients' skin and blood samples by combining them with clinical information. Subsequently, we bring together the results of the individual analyses using available domain knowledge to form a more systematic view of the disease pathogenesis. Toxicity testing is the process of determining the harmful effects of substances on living organisms. One of its applications is the safety assessment of drugs and other chemicals for humans. In this work we identify groups of toxicants that have similar mechanisms of action. Additionally, we develop a classification model that allows the toxicity of unknown compounds to be assessed.
    https://www.ester.ee/record=b523255
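
    For readers unfamiliar with graph convolutional networks, the sketch below shows a single propagation layer of the common Kipf-and-Welling form applied to a node-feature matrix. The adjacency matrix stands in for a heterogeneous disease network such as HENA; the dimensions, the two-layer classifier mentioned in the comment, and the function name are illustrative assumptions rather than the exact architecture used in the thesis.

```python
import numpy as np

def gcn_layer(adj, feats, weights):
    """One graph-convolution layer: aggregate each node's neighbourhood with
    symmetric normalisation D^-1/2 (A + I) D^-1/2, then apply a linear
    transform and ReLU."""
    a_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ feats @ weights, 0.0)     # ReLU

# Stacking two such layers and a softmax over classes (e.g. "associated" vs
# "not associated" with the disease) gives a node classifier that can score
# candidate genes.
```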

    Energy Aware Runtime Systems for Elastic Stream Processing Platforms

    Following steady growth in the computational performance required of processors, the multicore revolution started around 20 years ago. This revolution was mainly an answer to the power dissipation constraints restricting the increase of clock frequency in single-core processors. The multicore revolution brought not only the challenge of parallel programming, i.e. being able to develop software exploiting the entire capabilities of manycore architectures, but also the challenge of programming heterogeneous platforms. The question "on which processing element should a specific computational unit be mapped?" is well known in the embedded community. With the introduction of general-purpose graphics processing units (GPGPUs) and digital signal processors (DSPs), along with many-core processors on different system-on-chip platforms, heterogeneous parallel platforms are nowadays widespread over several domains, from consumer devices to media processing platforms for telecom operators. Finding a mapping, together with a suitable hardware architecture, is a process called design-space exploration. This process is very challenging for heterogeneous many-core architectures, which promise to offer benefits in terms of energy efficiency. The main problem is the exponential explosion of the design space. With the recent trend of increasing levels of heterogeneity in the chip, selecting the parameters to take into account when mapping software to hardware is still an open research topic in the embedded area. For example, the current Linux scheduler performs poorly when mapping tasks to the computing elements available in hardware. The only metric considered is CPU load, which, as recent work has shown, does not match the applications' true performance demands. Relying on it may produce an incorrect allocation of resources, resulting in wasted energy. This research work originates from the observation that these approaches do not provide full support for the dynamic behavior of stream processing applications, especially if such behavior is established only at runtime. This research contributes to the general goal of developing energy-efficient solutions for designing streaming applications on heterogeneous and parallel hardware platforms. Streaming applications are nowadays widespread in the software domain. Their distinctive characteristic is that they retrieve multiple streams of data and need to process them in real time. The proposed work develops new approaches to address the challenging problem of efficient runtime coordination of dynamic applications, focusing on energy and performance management.
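
    To make the scheduling argument concrete, the toy sketch below contrasts load-only placement with a policy that also consults a per-core-type energy estimate before mapping a task. The core names, task kinds, energy numbers, and thresholds are all hypothetical illustrations; they are not taken from the thesis or from the Linux scheduler.

```python
# Toy illustration of why CPU load alone is a poor mapping metric:
# the placement below also weighs an estimated energy cost per core type.
# All names and numbers are hypothetical.

cores = {"big0":    {"type": "big",    "load": 0.6},
         "little0": {"type": "LITTLE", "load": 0.2}}

# energy_model[task_kind][core_type] = assumed joules per activation
energy_model = {"video_filter":  {"big": 0.8, "LITTLE": 2.5},
                "sensor_fusion": {"big": 0.6, "LITTLE": 0.3}}

def place(task_kind, deadline_slack):
    """Pick the core that minimises estimated energy among cores that can
    still meet the deadline, instead of simply the least-loaded core."""
    feasible = {name: c for name, c in cores.items()
                if c["load"] < 0.9 and (c["type"] == "big" or deadline_slack > 0.5)}
    if not feasible:                      # fall back if nothing qualifies
        feasible = cores
    return min(feasible, key=lambda n: energy_model[task_kind][cores[n]["type"]])

print(place("sensor_fusion", deadline_slack=0.8))   # -> little0 (saves energy)
print(place("video_filter", deadline_slack=0.1))    # -> big0 (meets deadline)
```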

    Network-Compute Co-Design for Distributed In-Memory Computing

    The booming popularity of online services is rapidly raising the demands on modern datacenters. In order to cope with the data deluge, growing user bases, and tight quality-of-service constraints, service providers deploy massive datacenters with tens to hundreds of thousands of servers, keeping petabytes of latency-critical data memory resident. Such data distribution and the multi-tiered nature of the software used by feature-rich services result in frequent inter-server communication and remote memory access over the network. Hence, networking takes center stage in datacenters. In response to growing internal datacenter network traffic, networking technology is rapidly evolving. Lean user-level protocols, like RDMA, and high-performance fabrics have started making their appearance, dramatically reducing datacenter-wide network latency and offering unprecedented per-server bandwidth. At the same time, the end of Dennard scaling is grinding processor performance improvements to a halt. The net result is a growing mismatch between per-server network and compute capabilities: it will soon be difficult for a server processor to utilize all of its available network bandwidth. Restoring balance between network and compute capabilities requires tighter co-design of the two. The network interface (NI) is of particular interest, as it lies on the boundary of network and compute. In this thesis, we focus on the design of an NI for a lightweight RDMA-like protocol and its full integration with modern manycore server processors. The NI's capabilities scale with both the increasing network bandwidth and the growing number of cores on modern server processors. Leveraging our architecture's integrated NI logic, we introduce new functionality at the network endpoints that yields performance improvements for distributed systems. Such additions include new network operations with stronger semantics tailored to common application requirements and integrated logic for balancing network load across a modern processor's multiple cores. We make the case that exposing richer, end-to-end semantics to the NI is a unique enabler for optimizations that can reduce software complexity and remove significant load from the processor, contributing towards maintaining balance between the two valuable resources of network and compute. Overall, network-compute co-design is an approach that addresses the challenges associated with the emerging technological mismatch between compute and networking capabilities, yielding significant performance improvements for distributed memory systems.
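
    One of the endpoint additions mentioned above, balancing incoming network load across a processor's cores, can be pictured with a small software model. The join-shortest-queue policy, class name, and interface below are illustrative assumptions, not the mechanism actually implemented in the thesis.

```python
from collections import deque

class IntegratedNI:
    """Toy model of an on-chip network interface that steers incoming
    remote-memory requests to per-core queues instead of a fixed core."""

    def __init__(self, num_cores):
        self.queues = [deque() for _ in range(num_cores)]

    def receive(self, request):
        # Join-shortest-queue: dispatch to the least-occupied core,
        # avoiding head-of-line blocking behind a slow request.
        target = min(range(len(self.queues)), key=lambda c: len(self.queues[c]))
        self.queues[target].append(request)
        return target

ni = IntegratedNI(num_cores=4)
for req in ("read A", "read B", "write C"):
    print(req, "-> core", ni.receive(req))
```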

    Converging organoids and extracellular matrix: New insights into liver cancer biology

    Primary liver cancer, consisting primarily of hepatocellular carcinoma (HCC) and cholangiocarcinoma (CCA), is a heterogeneous malignancy with a dismal prognosis and the third leading cause of cancer mortality worldwide [1, 2]. It is characterized by unique histological features, late-stage diagnosis, a highly variable mutational landscape, and high levels of heterogeneity in biology and etiology [3-5]. Treatment options are limited, with surgical intervention the main curative option, although it is not available for the majority of patients, who are diagnosed at an advanced stage. Major contributing factors to the complexity and limited treatment options are the interactions between primary tumor cells, non-neoplastic stromal and immune cells, and the extracellular matrix (ECM). ECM dysregulation plays a prominent role in multiple facets of liver cancer, including initiation and progression [6, 7]. HCC often develops in already damaged environments containing large areas of inflammation and fibrosis, while CCA is commonly characterized by significant desmoplasia, the extensive formation of connective tissue surrounding the tumor [8, 9]. Thus, to gain a better understanding of liver cancer biology, sophisticated in vitro tumor models need to comprehensively incorporate the various aspects that together dictate liver cancer progression. Therefore, the aim of this thesis is to create in vitro liver cancer models through organoid technology approaches, allowing for novel insights into liver cancer biology and, in turn, providing potential avenues for therapeutic testing. To model primary epithelial liver cancer cells, organoid technology is employed in part I. To study and characterize the role of the ECM in liver cancer, decellularization of tumor tissue, adjacent liver tissue, and distant metastatic organs (i.e. lung and lymph node) is described, characterized, and combined with organoid technology to create improved tissue-engineered models for liver cancer in part II of this thesis. Chapter 1 provides a brief introduction to the concepts of liver cancer, cellular heterogeneity, decellularization and organoid technology. It also explains the rationale behind the work presented in this thesis. Chapter 2 provides an in-depth analysis of organoid technology and contrasts it with the different in vitro cell culture systems employed for liver cancer modeling. Reliable establishment of liver cancer organoids is crucial for advancing translational applications of organoids, such as personalized medicine. Therefore, as described in chapter 3, a multi-center analysis was performed on the establishment of liver cancer organoids. This revealed a global establishment efficiency rate of 28.2% (19.3% for hepatocellular carcinoma organoids (HCCO) and 36% for cholangiocarcinoma organoids (CCAO)). Additionally, potential solutions and future perspectives for increasing establishment efficiency are provided. Liver cancer organoids consist solely of primary epithelial tumor cells. To engineer an in vitro tumor model with the possibility of immunotherapy testing, CCAO were combined with immune cells in chapter 4. Co-culture of CCAO with peripheral blood mononuclear cells and/or allogeneic T cells revealed an effective anti-tumor immune response, with distinct interpatient heterogeneity. These cytotoxic effects were mediated by cell-cell contact and the release of soluble factors, although indirect killing through soluble factors was only observed in one organoid line.
Thus, this model provided a first step towards developing immunotherapy for CCA on an individual patient level. The success of personalized medicine depends on an organoid's ability to recapitulate patient tissue faithfully. Therefore, in chapter 5 a novel organoid system was created in which branching morphogenesis was induced in cholangiocyte and CCA organoids. Branching cholangiocyte organoids self-organized into tubular structures with high similarity to primary cholangiocytes, based on single-cell sequencing and functionality. Similarly, branching CCAO acquire an in vitro morphology more similar to that of primary tumors. Moreover, these branching CCAO correlate more closely with the transcriptomic profile of patient-paired tumor tissue and show increased resistance to gemcitabine and cisplatin, the standard chemotherapy regimen for CCA patients in the clinic. As discussed, CCAO represent the epithelial compartment of CCA. Proliferation, invasion, and metastasis of epithelial tumor cells are highly influenced by the interaction with their cellular and extracellular environment. The remodeling of various properties of the extracellular matrix (ECM), including stiffness, composition, alignment, and integrity, influences tumor progression. Chapter 6 discusses the alterations of the ECM in solid tumors and the translational impact of our increased understanding of these alterations. The success of ECM-related cancer therapy development requires an intimate understanding of the malignancy-induced changes to the ECM. This principle was applied to liver cancer in chapter 7, in which the dysregulation of the liver cancer ECM was characterized through an integrative molecular and mechanical approach. An optimized agitation-based decellularization protocol was established for primary liver cancer (HCC and CCA) and paired adjacent tissue (HCC-ADJ and CCA-ADJ). Novel malignancy-related ECM protein signatures were found, which had previously been overlooked in liver cancer transcriptomic data. Additionally, the mechanical characteristics were probed, revealing divergent macro- and micro-scale mechanical properties and a higher alignment of collagen in CCA. This study provided a better understanding of ECM alterations in liver cancer as well as a potential scaffold for organoid culture. This was applied to CCA in chapter 8 by combining decellularized CCA tumor ECM and tumor-free liver ECM with CCAO to study cell-matrix interactions. Culture of CCAO in tumor ECM resulted in a transcriptome closely resembling in vivo patient tumor tissue, accompanied by an increase in chemoresistance. In tumor-free liver ECM, devoid of desmoplasia, CCAO initiated a desmoplastic reaction through increased collagen production. If desmoplasia was already present, distinct ECM proteins were produced by the organoids. These were tumor-related proteins associated with poor patient survival. To extend this method of studying cell-matrix interactions to a metastatic setting, lung and lymph node tissue was decellularized and recellularized with CCAO in chapter 9, as these are common locations of metastasis in CCA. Decellularization resulted in the removal of cells while preserving ECM structure and protein composition, linked to hallmarks of tissue-specific functioning. Recellularization revealed that lung and lymph node ECM induced different gene expression profiles in the organoids, related to cancer stem cell phenotype, cell-ECM integrin binding, and epithelial-to-mesenchymal transition.
Furthermore, the metabolic activity of CCAO in lung and lymph node ECM was significantly influenced by the metastatic location, the original characteristics of the patient tumor, and the donor of the target organ. The previously described in vitro tumor models utilized decellularized scaffolds with native structure. Decellularized ECM can also be used to create tissue-specific hydrogels through digestion and gelation procedures. These hydrogels were created from both porcine and human livers in chapter 10. The liver ECM-based hydrogels were used to initiate and culture healthy cholangiocyte organoids, which maintained cholangiocyte marker expression, thus providing an alternative to initiating organoids in BME. Building upon this, in chapter 11 human liver ECM-based extracts were used in combination with a one-step microfluidic encapsulation method to produce size-standardized CCAO. The established system can reduce the size variability conventionally seen in organoid culture by providing uniform scaffolding. Encapsulated CCAO retained their stem cell phenotype and were amenable to drug screening, showing the feasibility of scalable CCAO production for high-throughput drug screening approaches. Lastly, chapter 12 provides a global discussion and future outlook on tumor tissue engineering strategies for liver cancer using organoid technology and decellularization. Combining multiple aspects of liver cancer, both cellular and extracellular, with tissue engineering strategies provides advanced tumor models that can delineate fundamental mechanistic insights as well as provide a platform for drug screening approaches.

    Symbolic Programming of Distributed Cyber-Physical Systems

    Cyber-Physical Systems (CPSs) tightly integrate physical-world phenomena and the cyber aspects of computational units. The composition of physical, computational and communication systems demands different levels and types of abstraction, as well as novel programming methodologies allowing for homogeneous programming, knowledge representation and exchange on heterogeneous devices. Current modeling approaches, frameworks and architectures prove largely inadequate for the task, especially when resource-constrained devices are involved. This work proposes symbolic computation as an effective solution for programming resource-constrained CPS devices with code that maintains strict ties to high-level specifications expressed in natural language while supporting interoperability among heterogeneous devices. Design, architectural, programming, and deployment aspects of CPSs are addressed through a single formalism unifying the specification of both the cyber and physical parts of CPSs. In particular, programming patterns are modeled as sequences of words adhering to natural-language syntax and semantics. Given a software under test (SUT), i.e. an input program expressed as a natural-language sentence, formal specifications are used to generate oracles for sentence verification and to generate input test cases. The choice of natural-language-inspired programming supplies a mechanism for developing the same software on different hardware platforms, ensuring interoperability among heterogeneous devices. Formal specifications also make it possible to generate stress tests that verify that program components behave as expected under repeated execution. In order to make high-level symbolic programs run on real hardware devices with no loss of expressivity during the translation of high-level specifications into an executable implementation, this work proposes a novel software architecture, Distributed Computing for Constrained Devices (DC4CD), as a supporting platform. The proposed architecture enables symbolic processing and distributed computing on devices with very limited energy, communication and processing capabilities that can be integrated into CPSs. In particular, DC4CD has been extensively used to test the symbolic distributed programming methodology on Wireless Sensor Networks (WSNs) that include nodes with actuation abilities. The platform offers networking abstractions for the exchange of symbolic code among peer devices and allows designers to change at runtime, even wirelessly on deployed nodes, not only the application code but also the system code.
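
    A minimal sketch of the natural-language programming pattern described above, assuming a hypothetical three-word subject-verb-object grammar: the same formal specification serves both as an oracle that verifies an input program-sentence and as a generator of input test cases. The word lists and function names are invented for illustration and are not DC4CD's actual vocabulary or API.

```python
import itertools
import re

# Hypothetical mini-grammar in the spirit of natural-language programming
# patterns; the real DC4CD word lists and semantics are not given here.
GRAMMAR = {"subject": ("node1", "node2"),
           "verb":    ("read", "send"),
           "object":  ("temperature", "humidity")}

def accepts(sentence):
    """Oracle: a program-sentence is valid iff its words follow the
    subject-verb-object pattern of the grammar."""
    words = re.findall(r"\w+", sentence.lower())
    return (len(words) == 3
            and words[0] in GRAMMAR["subject"]
            and words[1] in GRAMMAR["verb"]
            and words[2] in GRAMMAR["object"])

def generate_tests():
    """Enumerate input test cases directly from the specification."""
    return [" ".join(t) for t in itertools.product(
        GRAMMAR["subject"], GRAMMAR["verb"], GRAMMAR["object"])]

assert accepts("node1 read temperature")
assert not accepts("node1 temperature read")
print(len(generate_tests()), "generated test sentences")   # -> 8
```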

    Integrated shared-memory and message-passing communication in the Alewife multiprocessor

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 237-246) and index. By John David Kubiatowicz.