
    Resource-aware scheduling for 2D/3D multi-/many-core processor-memory systems

    This dissertation addresses the complexities of 2D/3D multi-/many-core processor-memory systems, focusing on two key areas: enhancing timing predictability in real-time multi-core processors and optimizing performance within thermal constraints. The integration of an increasing number of transistors into compact chip designs, while boosting computational capacity, presents challenges in resource contention and thermal management. The first part of the thesis improves timing predictability. We enhance shared cache interference analysis for set-associative caches, advancing the calculation of Worst-Case Execution Time (WCET). This development enables accurate assessment of cache interference and the effectiveness of partitioned schedulers in real-world scenarios. We introduce TCPS, a novel task and cache-aware partitioned scheduler that optimizes cache partitioning based on task-specific WCET sensitivity, leading to improved schedulability and predictability. Our research explores various cache and scheduling configurations, providing insights into their performance trade-offs. The second part focuses on thermal management in 2D/3D many-core systems. Recognizing the limitations of Dynamic Voltage and Frequency Scaling (DVFS) in S-NUCA many-core processors, we propose synchronous thread migrations as a thermal management strategy. This approach culminates in the HotPotato scheduler, which balances performance and thermal safety. We also introduce 3D-TTP, a transient temperature-aware power budgeting strategy for 3D-stacked systems, reducing the need for Dynamic Thermal Management (DTM) activation. Finally, we present 3QUTM, a novel method for 3D-stacked systems that combines core DVFS and memory bank Low Power Modes with a learning algorithm, optimizing response times within thermal limits. This research contributes significantly to enhancing performance and thermal management in advanced processor-memory systems.
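    The idea of partitioning a cache by WCET sensitivity can be illustrated with a toy sketch (a hypothetical simplification for illustration only; the `wcet` model, task values, and greedy allocation below are invented and are not the dissertation's TCPS algorithm):

```python
# Toy illustration: allocate cache ways greedily to the task whose
# worst-case execution time (WCET) benefits most from one more way.

def wcet(base, sensitivity, ways):
    # Assumed model: WCET shrinks with diminishing returns per cache way.
    return base / (1.0 + sensitivity * ways)

def partition_cache(tasks, total_ways):
    """tasks: list of (base_wcet, sensitivity); returns ways per task."""
    alloc = [0] * len(tasks)
    for _ in range(total_ways):
        # WCET reduction each task would gain from one additional way.
        gains = [
            wcet(b, s, alloc[i]) - wcet(b, s, alloc[i] + 1)
            for i, (b, s) in enumerate(tasks)
        ]
        best = max(range(len(tasks)), key=lambda i: gains[i])
        alloc[best] += 1
    return alloc

tasks = [(10.0, 0.5), (10.0, 0.1), (4.0, 0.9)]
print(partition_cache(tasks, 8))
```

    Under this assumed model, cache-sensitive tasks receive more ways than insensitive ones, which is the intuition behind partitioning by WCET sensitivity rather than splitting the cache evenly.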

    Spatial adaptive settlement systems in archaeology. Modelling long-term settlement formation from spatial micro interactions

    Despite a research history spanning more than a century, settlement patterns still hold promise to contribute to theories of large-scale processes in human history. Mostly they have been treated as passive imprints of past human activities, and the spatial interactions they shape have not been studied as a driving force of historical processes. While archaeological knowledge has been used to construct geographical theories of settlement evolution, there still exist gaps in this knowledge, and no theoretical framework has yet been adopted to explore settlement patterns as spatial systems emerging from the micro-choices of small population units. The goal of this thesis is to propose a conceptual model of adaptive settlement systems based on the complex adaptive systems framework. The model frames settlement system formation as an adaptive system containing spatial features, information flows, decision-making population units (agents), and cross-scale feedback loops forming between the location choices of individuals and the space modified by their aggregated choices. The aim of the model is to find new ways of interpreting archaeological locational data, as well as a closer theoretical integration of micro-level choices and meso-level settlement structures. The thesis is divided into five chapters. The first chapter is dedicated to conceptualising the general model based on existing literature, and shows that settlement systems are inherently complex adaptive systems and therefore require the tools of complexity science for causal explanations. The following chapters explore both empirical and theoretically simulated settlement patterns, each dedicated to studying selected information flows and feedbacks in the context of the whole system. The second and third chapters explore the case study of Stone Age settlement in Estonia, comparing residential location choice principles of different periods.
In chapter 2 the relation between environmental conditions and residential choice is explored statistically. The results confirm that the relation is significant but varies between different archaeological phenomena. In the third chapter, hunter-fisher-gatherer and early agrarian Corded Ware settlement systems were compared spatially using inductive models. The results indicated a large difference in their perception of the landscape regarding suitability for habitation, leading to the conclusion that early agrarian land use significantly extended land use potential and provided a competitive spatial benefit. In addition to spatial differences, model performance was compared, and the difference was discussed in the context of the proposed adaptive settlement system model. The last two chapters present theoretical agent-based simulation experiments intended to study effects discussed in relation to environmental model performance and environmental determinism in general. In the fourth chapter the central place foraging model was embedded in the proposed model, and resource depletion, as an environmental modification mechanism, was explored. The study excluded the possibility that mobility itself would lead to the modelling effects discussed in the previous chapter. The purpose of the last chapter is to disentangle the complex relations between social and human-environment interactions. The study exposed the non-linear spatial effects that expected population density can have on the system, and the general robustness of environmental inductive models in archaeology to randomness and social effects. The model indicates that social interactions between individuals lead to the formation of a group agency that is determined by the environment, even if individual cognitions consider the environment insignificant. It also indicates that the spatial configuration of the environment has a certain influence on population clustering, thereby providing a potential pathway to population aggregation.
These empirical and theoretical results demonstrate the new insights provided by the complex adaptive systems framework. Some of the results, including the explanation of the empirical findings, required the conceptual model to provide a framework of interpretation.
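    The cross-scale feedback between individual location choices and aggregated settlement can be sketched as a minimal agent-based loop (my own toy illustration, not the thesis model; the suitability scores and social-attraction weight are invented):

```python
import random

random.seed(1)
suitability = [0.2, 0.9, 0.5, 0.4]  # assumed environmental scores per cell
occupancy = [0, 0, 0, 0]            # aggregated past choices
SOCIAL = 0.3                        # assumed weight of social attraction

def choose_cell():
    # Each agent weighs environment plus a pull toward occupied cells,
    # so aggregated choices feed back into later location decisions.
    weights = [s + SOCIAL * occupancy[i] for i, s in enumerate(suitability)]
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

for _ in range(200):
    occupancy[choose_cell()] += 1

print(occupancy)
```

    Even in this tiny sketch, early choices reinforce later ones, producing clustering that neither the environment alone nor the social pull alone would generate.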

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the semiconductor field has, over the last 15 years, established power as a first-class design concern. As a result, the computing systems community is forced to find alternative design approaches to facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and classifies and presents the technical details of state-of-the-art software and hardware approximation techniques. Comment: Under review at ACM Computing Surveys
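    One of the simplest software approximation techniques in this survey's scope is loop perforation, which skips a fraction of loop iterations to trade accuracy for work; the sketch below is illustrative (function names and data are mine, not the article's):

```python
# Loop perforation: process every `stride`-th element only,
# doing roughly 1/stride of the work at a small accuracy cost.

def mean_exact(xs):
    return sum(xs) / len(xs)

def mean_perforated(xs, stride=2):
    sampled = xs[::stride]
    return sum(sampled) / len(sampled)

data = list(range(1000))
exact = mean_exact(data)                  # 499.5
approx = mean_perforated(data, stride=4)  # 498.0, from 1/4 of the work
print(exact, approx, abs(exact - approx) / exact)
```

    The relative error here is about 0.3% for a 4x reduction in summation work, which is the kind of accuracy/effort trade-off approximate computing formalizes.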

    Knowledge Distillation and Continual Learning for Optimized Deep Neural Networks

    Over the past few years, deep learning (DL) has been achieving state-of-the-art performance on various human tasks such as speech generation, language translation, image segmentation, and object detection. While traditional machine learning models require hand-crafted features, deep learning algorithms can automatically extract discriminative features and learn complex knowledge from large datasets. This powerful learning ability makes deep learning models attractive to both academia and big corporations. Despite their popularity, deep learning methods still have two main limitations: large memory consumption and catastrophic knowledge forgetting. First, DL algorithms use very deep neural networks (DNNs) with billions of parameters, which have a large model size and slow inference speed. This restricts the application of DNNs in resource-constrained devices such as mobile phones and autonomous vehicles. Second, DNNs are known to suffer from catastrophic forgetting: when incrementally learning new tasks, the model's performance on old tasks drops significantly. The ability to accommodate new knowledge while retaining previously learned knowledge is called continual learning. Since the real-world environments in which the model operates are always evolving, a robust neural network needs this continual learning ability to adapt to new changes.
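    The knowledge-distillation objective named in the title can be sketched in plain Python (a hedged illustration in the style of temperature-softened soft targets; the logit values are made up, and a real system would use a framework such as PyTorch):

```python
import math

def softmax(logits, T=1.0):
    # Temperature T > 1 softens the distribution, exposing "dark knowledge".
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 as is conventional.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]
print(kd_loss(teacher, teacher))          # identical logits: zero loss
print(kd_loss([0.2, 1.0, 3.0], teacher))  # mismatched student: positive
```

    Minimizing this loss lets a small student network mimic a large teacher, which is one route to the memory reductions the abstract motivates.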

    Geoarchaeological Investigations of Late Pleistocene Physical Environments and Impacts of Prehistoric Foragers on the Ecosystem in Northern Malawi and Austria

    A growing body of research shows that not only did environmental changes play an important role in human evolution, but humans in turn have impacted ecosystems and landscape evolution since the Late Pleistocene. This thesis presents collaborative work on Late Pleistocene open-air sites in the Karonga District of northern Malawi, in which new aspects of forager behavior came to light through the reconstruction of physical environments. My work has helped recognize that late Middle Stone Age (MSA) activity and tool production occurred in locally more open riparian environments within evergreen gallery forest, surrounded by a regional vegetation dominated by miombo woodlands and savanna. Additionally, MSA hunter-gatherers exploited the confluence of river and wetland areas along the shores of Lake Malawi, which likely served as important corridors for the dispersal of biota. By comparing data from the archaeological investigations with lake core records, we were able to identify effects of anthropogenic burning on vegetation structures and sedimentation in the region as early as 80 thousand years ago. These findings not only proved it possible to uncover early impacts of human activity on the ecosystem, but also emphasize the importance of fire in the lives of early foragers. Publications contained within this dissertation:
A. Wright, D.K., Thompson, J.C., Schilt, F.C., Cohen, A., Choi, J-H., Mercader, J., Nightingale, S., Miller, C.E., Mentzer, S.M., Walde, D., Welling, M., and Gomani-Chindebvu, E. "Approaches to Middle Stone Age landscape archaeology in tropical Africa". Special issue Geoarchaeology of the Tropics, Journal of Archaeological Science 77:64-77. http://dx.doi.org/10.1016/j.jas.2016.01.014
B. Schilt, F.C., Verpoorte, A., Antl, W. "Micromorphology of an Upper Paleolithic cultural layer at Grub-Kranawetberg, Austria". Journal of Archaeological Science: Reports 14:152-162. http://dx.doi.org/10.1016/j.jasrep.2017.05.041
C. Nightingale, S., Schilt, F.C., Thompson, J.C., Wright, D.K., Forman, S., Mercader, J., Moss, P., Clarke, S., Itambu, M., Gomani-Chindebvu, E., Welling, M. "Late Middle Stone Age Behavior and Environments at Chaminade I (Karonga, Malawi)". Journal of Paleolithic Archaeology 2-3:258-397. https://doi.org/10.1007/s41982-019-00035-3
D. Thompson, J.C.*, Wright, D.K.*, Ivory, S.J.*, Choi, J-H., Nightingale, S., Mackay, A., Schilt, F.C., Otárola-Castillo, E., Mercader, J., Forman, S.L., Pietsch, T., Cohen, A.S., Arrowsmith, J.R., Welling, M., Davis, J., Schiery, B., Kaliba, P., Malijani, O., Blome, M.W., O'Driscoll, C., Mentzer, S.M., Miller, C., Heo, S., Choi, J., Tembo, J., Mapemba, F., Simengwa, D., and Gomani-Chindebvu, E. "Early human impacts and ecosystem reorganization in southern-central Africa". Science Advances 7(19): eabf9776. *equal contribution. https://doi.org/10.1126/sciadv.abf9776
E. Schilt, F.C., Miller, C.M., Wright, D.K., Mentzer, S.M., Mercader, J., Moss, Choi, J.-H., Siljedal, G., Clarke, S., Mwambwiga, A., Thomas, K., Barbieri, A., Kaliba, P., Gomani-Chindebvu, E., Thompson, J.C. "Hunter-gatherer environments at the Late Pleistocene sites of Bruce and Mwanganda's Village, northern Malawi". Quaternary Science Reviews 292: 107638. https://www.sciencedirect.com/science/article/pii/S0277379122002694

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on applications combining synthetic aperture radar and deep learning technology, aiming to further promote the development of SAR image intelligent interpretation technology. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day and all-weather working capacity gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, driverless vehicles, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports.
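    The convolution at the heart of the CNNs mentioned above can be shown in a few lines of plain Python (illustrative only, not from the reprint; the "image" and kernel are synthetic):

```python
# One 3x3 convolution plus ReLU over a tiny synthetic image:
# a bright vertical stripe, which a Sobel-x kernel highlights.

def conv2d_relu(img, kernel):
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            acc = sum(
                img[i + di][j + dj] * kernel[di][dj]
                for di in range(3) for dj in range(3)
            )
            row.append(max(0.0, acc))  # ReLU non-linearity
        out.append(row)
    return out

img = [[0, 0, 1, 0, 0] for _ in range(5)]       # vertical stripe
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # edge-detecting kernel
print(conv2d_relu(img, sobel_x))
```

    A CNN stacks many such learned kernels and non-linearities, which is what lets it pick out scattering patterns in SAR imagery rather than hand-crafted features.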

    Power system adequacy: on two-area models and the capacity procurement decision process

    In this work, we explore methodological extensions to modelling practices in power system adequacy for single-area and two-area systems. Specifically, we build on some of the practices currently in use in Great Britain (GB) by National Grid, framing this in the context of the current technological transition in which renewable capacity is gradually replacing a considerable share of fossil-fuel-based capacity. We explore two-area extensions of the methodology currently used in GB to quantify risk in single-area models. In doing so, we also explore the impact of shortfall-sharing policies and wind capacity on risk indices and on the value of interconnection. Furthermore, we propose a model based on the statistical theory of extreme values to characterise statistical dependence across systems in both net demand (defined as power demand minus renewable generation) and capacity surpluses/deficits (defined as power supply minus demand), examining how the strength of statistical dependence influences post-interconnection risk and the capacity value of interconnection. Lastly, we analyse the risk profile of a single-area system as reliance on wind capacity grows, looking at risk beyond the standard set of risk indices, which are based on long-term averages. In doing this, we examine trends which are overlooked by the latter, yet are of considerable importance for decision-makers. Moreover, we incorporate a measure of the decision-maker's degree of risk aversion into the current capacity procurement methodology in GB, and examine the impact of this and other parameters on the amount of procured capacity. We find that shortfall-sharing policies can have a sizeable impact on the interconnector's valuation in terms of security of supply, especially for systems that are significantly smaller than their neighbours. Moreover, this valuation also depends strongly on the risk indices chosen to measure it.
We also find that the smoothing effect of parametric extreme value models on tail regions can have a material effect on practical adequacy calculations for post-interconnection risks, and that assumed independence between conventional generation fleets makes capacity shortfall co-occurrences only weakly dependent (in a precisely defined sense) across areas, despite much stronger statistical dependence between system net demands. As more wind capacity is installed, we find multiple relevant changes in the (single-area) system's risk profile that are not expressed by the standard risk indices: in particular, a substantial increase in the frequency of severe events, extreme year-to-year variability of outturn, and a progression to a system with fewer days of potentially much larger shortfalls. Moreover, we show that a high reliance on wind introduces a substantial amount of uncertainty into the calculations due to the limited number of available historic years, which cannot account for the wide range of possible weather conditions the system could experience in the future. Finally, we find that a higher reliance on wind generation also impacts the capacity procurement decision process, potentially making the amount of procured capacity considerably more sensitive to parameters such as the value of lost load.
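    The long-term-average risk indices this line of work builds on, such as loss-of-load expectation (LOLE), can be estimated with a simple Monte Carlo sketch; the fleet size, availability, and demand figures below are invented for illustration and do not describe the GB system:

```python
import random

random.seed(0)
N_UNITS, UNIT_CAP, AVAIL = 40, 100.0, 0.95  # assumed conventional fleet
# Assumed daily peak demand profile for one year (MW).
DEMAND = [3400.0 + 200.0 * random.random() for _ in range(365)]

def sample_year():
    """Count shortfall days in one simulated year of unit outages."""
    days = 0
    for d in DEMAND:
        available = sum(UNIT_CAP for _ in range(N_UNITS)
                        if random.random() < AVAIL)
        if available < d:
            days += 1
    return days

years = [sample_year() for _ in range(100)]
lole = sum(years) / len(years)  # expected shortfall days per year
print(lole)
```

    Averaging over simulated years is exactly what hides the year-to-year variability and severe-event frequency discussed above, which is why the thesis looks beyond such indices.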

    Chatbots for Modelling, Modelling of Chatbots

    Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defense: 28-03-202

    Proximity detection protocols for IoT devices

    In recent years, we have witnessed the growth of the Internet of Things paradigm, with its increased pervasiveness in our everyday lives. The possible applications are diverse: from a smartwatch able to measure the heartbeat and communicate it to the cloud, to a device that triggers an event when we approach an exhibit in a museum. Present in many of these applications is the Proximity Detection task: for instance, the heartbeat could be measured only when the wearer is near a well-defined location for medical purposes, or the tourist attraction must be triggered only if someone is very close to it. Indeed, the ability of an IoT device to sense the presence of other devices nearby and calculate the distance to them can be considered the cornerstone of various applications, motivating research on this fundamental topic. The energy constraints of IoT devices are often at odds with the need for continuous operation to sense the environment and to achieve highly accurate distance measurements from neighbors, making the design of Proximity Detection protocols a challenging task.
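    A common building block of such protocols is RSSI-based ranging with the log-distance path-loss model; the sketch below is a generic illustration (the constants are assumed values, and the thesis's own protocols may differ):

```python
import math

TX_POWER = -59.0  # assumed RSSI at 1 m, in dBm
N = 2.0           # assumed path-loss exponent (free space)

def rssi_at(distance_m):
    # Log-distance path-loss model: RSSI falls off with log10(distance).
    return TX_POWER - 10.0 * N * math.log10(distance_m)

def distance_from_rssi(rssi):
    # Invert the model to estimate distance from a measured RSSI.
    return 10 ** ((TX_POWER - rssi) / (10.0 * N))

def is_proximate(rssi, threshold_m=2.0):
    # Declare "nearby" if the estimated distance is under the threshold.
    return distance_from_rssi(rssi) <= threshold_m

print(distance_from_rssi(rssi_at(3.0)))  # round-trips to ~3.0 m
```

    In practice RSSI is noisy, so real protocols smooth measurements over time before thresholding; this interacts directly with the energy constraints noted above, since denser sampling costs power.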