10,088 research outputs found

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    Full text link
    The rapid growth of demanding applications in domains such as multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, over the last 15 years the semiconductor field has established power as a first-class design concern. As a result, the computing-systems community has been forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of the state-of-the-art software and hardware approximation techniques.
    Comment: Under review at ACM Computing Surveys
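    As a concrete illustration of a software-level approximation of the kind the survey classifies, the sketch below applies loop perforation, a classic program-level technique that skips iterations to trade accuracy for speed; the workload and skip factor here are illustrative assumptions, not an example taken from the survey.

```python
# Loop perforation: a classic software-level approximation that trades
# accuracy for speed by executing only a fraction of a loop's iterations.
# The workload (a mean over a large list) and the skip factor are
# illustrative assumptions.

def exact_mean(values):
    return sum(values) / len(values)

def perforated_mean(values, skip=4):
    # Execute only every `skip`-th iteration and average the sample.
    sampled = values[::skip]
    return sum(sampled) / len(sampled)

data = [float(i % 100) for i in range(1_000_000)]
print(exact_mean(data))       # exact result
print(perforated_mean(data))  # ~4x less work, approximate result
```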

    Learning to Collaborate by Grouping: a Consensus-oriented Strategy for Multi-agent Reinforcement Learning

    Full text link
    Multi-agent systems require effective coordination between groups and individuals to achieve common goals. However, current multi-agent reinforcement learning (MARL) methods primarily focus on improving individual policies and do not adequately address group-level policies, which leads to weak cooperation. To address this issue, we propose a novel Consensus-oriented Strategy (CoS) that emphasizes group and individual policies simultaneously. Specifically, CoS comprises two main components: (a) the vector quantized group consensus module, which extracts discrete latent embeddings that represent the stable and discriminative group consensus, and (b) the group consensus-oriented strategy, which integrates the group policy using a hypernet and the individual policies using the group consensus, thereby promoting coordination at both the group and individual levels. Through empirical experiments on cooperative navigation tasks with both discrete and continuous spaces, as well as Google Research Football, we demonstrate that CoS outperforms state-of-the-art MARL algorithms and achieves better collaboration, thus providing a promising solution for effective coordination in multi-agent systems.
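    For readers unfamiliar with vector quantization, the following minimal numpy sketch illustrates the discretization step that a module like the vector quantized group consensus module typically builds on; the codebook size, embedding dimension, and agent count are assumptions for illustration, and the training machinery (e.g., straight-through gradients) is omitted.

```python
# A minimal numpy sketch of vector quantization: each continuous group
# embedding is snapped to its nearest entry in a learned codebook,
# yielding a discrete consensus code. Codebook size, embedding
# dimension, and agent count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))   # 16 discrete codes, 8-dim embeddings

def quantize(z):
    """Map a batch of continuous embeddings (n, 8) to codebook entries."""
    # Squared distance between each embedding and each codebook vector.
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)            # index of the nearest code per embedding
    return codebook[idx], idx

agents_z = rng.normal(size=(4, 8))    # embeddings from 4 agents
z_q, codes = quantize(agents_z)
print(codes)                          # discrete consensus indices
```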

    CULTURAL HERITAGE AND RISK: LET'S GIVE INTELLIGENCE TO OUR TECHNOLOGIES

    Get PDF
    In recent decades, technologies applied to cultural heritage have evolved at an extraordinary pace in terms of the accuracy and reliability of measurement and restitution, as well as in the management of acquired data. In particular, these measurement and documentation activities are of fundamental importance when interfaced with other types of heterogeneous data acquired in different ways. Nevertheless, in the face of such a complex framework of know-how, increasingly advanced technologies, dedicated programs and funding, meetings and debates, and the establishment of specific institutional bodies to manage decision-making plans and their implementation, we continue to witness the frantic chasing of emergencies after catastrophic events have occurred. The spectrum of risks is manifesting itself in ever greater breadth: to the evident effects of Climate Change are added, with increasing frequency, those deriving from hydrogeological instability, from neglect of the territory and coasts, from anthropic devastation due to the senseless consumption of soil, and from the unsustainable pressure of an uncultured, omnivorous tourism. This contribution deals in a general way with the theme of technologies for the knowledge of Cultural Heritage. It critically questions the dangers inherent in the "indiscriminate, unconscious and uncritical" use of such technologies when oriented towards preservation. The aim is to prompt a discussion within the scientific community dealing with the documentation and conservation of cultural heritage, in order to promote an indispensable culture of prevention and planned conservation in which the intelligent relationship between Man and technology must be rediscovered.

    Cyber Conflict and Just War Theory

    Get PDF

    Financial and Economic Review 22.

    Get PDF

    Colour technologies for content production and distribution of broadcast content

    Get PDF
    Accurate colour reproduction has long been a priority driving the development of new colour imaging systems that maximise human perceptual plausibility. This thesis explores machine learning algorithms for colour processing to assist both content production and distribution. First, this research studies colourisation technologies with practical use cases in the restoration and processing of archived content. The research targets practical, deployable solutions, developing a cost-effective pipeline that integrates the activity of the producer into the processing workflow. In particular, a fully automatic image colourisation paradigm using Conditional GANs is proposed to improve content generalisation and the colourfulness of existing baselines. Moreover, a more conservative solution is considered by providing references that guide the system towards more accurate colour predictions. A fast end-to-end architecture is proposed to improve existing exemplar-based image colourisation methods while decreasing complexity and runtime. Finally, the proposed image-based methods are integrated into a video colourisation pipeline. A general framework is proposed to reduce temporal flickering and the propagation of errors when such methods are applied frame by frame. The proposed model is jointly trained to stabilise the input video and to cluster its frames with the aim of learning scene-specific modes. Second, this research explores colour processing technologies for content distribution, with the aim of effectively delivering the processed content to a broad audience. In particular, video compression is tackled by introducing a novel methodology for chroma intra prediction based on attention models. Although the proposed architecture helped to gain control over the reference samples and better understand the prediction process, the complexity of the underlying neural network significantly increased the encoding and decoding time. Therefore, aiming at efficient deployment within the latest video coding standards, this work also focuses on simplifying the proposed architecture to obtain a more compact and explainable model.
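    To give a sense of how attention can drive chroma intra prediction, here is a toy numpy sketch in which each chroma sample is predicted as a softmax-weighted mix of boundary reference chroma, with weights derived from luma similarity; the shapes, the scaling constant, and the similarity measure are assumptions for illustration, not the thesis's exact architecture.

```python
# Toy attention-based chroma intra prediction: each chroma sample in the
# current block is a softmax-weighted mix of boundary reference chroma,
# with weights derived from luma similarity. All values here are
# illustrative assumptions.
import numpy as np

def predict_chroma(block_luma, ref_luma, ref_chroma, tau=10.0):
    """block_luma: (n,) luma of block pixels; ref_luma/ref_chroma: (m,) boundary refs."""
    # Attention score: negative squared luma difference (closer luma -> higher weight).
    scores = -((block_luma[:, None] - ref_luma[None, :]) ** 2) / tau
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)       # softmax over reference samples
    return w @ ref_chroma                   # (n,) predicted chroma

block_luma = np.array([60., 62., 120., 118.])
ref_luma   = np.array([61., 119.])
ref_chroma = np.array([30., 90.])
print(predict_chroma(block_luma, ref_luma, ref_chroma))
```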

    Artificial Intelligence, Robots, and Philosophy

    Get PDF
    This book is a collection of all the papers published in the special issue "Artificial Intelligence, Robots, and Philosophy," Journal of Philosophy of Life, Vol. 13, No. 1, 2023, pp. 1-146. The authors discuss a variety of topics such as science fiction and space ethics, the philosophy of artificial intelligence, the ethics of autonomous agents, and virtuous robots. Through their discussions, readers are able to think deeply about the essence of modern technology and the future of humanity. All papers were invited and completed in spring 2020, though because of the Covid-19 pandemic and other problems, the publication was delayed until this year. I apologize to the authors and potential readers for the delay. I hope that readers will enjoy these arguments on digital technology and its relationship with philosophy.
    Contents:
    - Introduction: Descartes and Artificial Intelligence; Masahiro Morioka
    - Isaac Asimov and the Current State of Space Science Fiction: In the Light of Space Ethics; Shin-ichiro Inaba
    - Artificial Intelligence and Contemporary Philosophy: Heidegger, Jonas, and Slime Mold; Masahiro Morioka
    - Implications of Automating Science: The Possibility of Artificial Creativity and the Future of Science; Makoto Kureha
    - Why Autonomous Agents Should Not Be Built for War; István Zoltán Zárdai
    - Wheat and Pepper: Interactions Between Technology and Humans; Minao Kukita
    - Clockwork Courage: A Defense of Virtuous Robots; Shimpei Okamoto
    - Reconstructing Agency from Choice; Yuko Murakami
    - Gushing Prose: Will Machines Ever Be Able to Translate as Badly as Humans?; Rossa Ó Muireartaigh

    Tourism and heritage in the Chornobyl Exclusion Zone

    Get PDF
    Tourism and Heritage in the Chornobyl Exclusion Zone (CEZ) uses an ethnographic lens to explore the dissonances associated with the commodification of Chornobyl's heritage. The book considers the role of guides as experience brokers, focusing on the synergy between tourists and guides in the performance of heritage interpretation. Banaszkiewicz proposes to perceive tour guides as important actors in the bottom-up construction of heritage discourse, contributing to a more inclusive and participatory approach to heritage management. Demonstrating that the CEZ has been going through a dynamic transformation into a mass tourism attraction, the book offers a critical reflection on heritagisation as a meaning-making process in which the resources of the past are interpreted, negotiated, and recognised as a valuable legacy. Applying the concept of dissonant heritage to describe the heterogeneous character of the CEZ, the book broadens the interpretative scope of dark tourism, which takes on a new dimension in the context of the war in Ukraine. Tourism and Heritage in the Chornobyl Exclusion Zone argues that post-disaster sites such as Chornobyl can teach us a great deal about the importance of preserving cultural and natural heritage for future generations. The book will be of interest to academics and students who are engaged in the study of heritage, tourism, memory, disasters, and Eastern Europe.

    Knowledge-based Modelling of Additive Manufacturing for Sustainability Performance Analysis and Decision Making

    Get PDF
    Additive manufacturing (AM) has been considered viable for complex geometries, topology-optimized parts, and parts that are otherwise difficult to produce using conventional manufacturing processes. Despite these advantages, one of the prevalent challenges in AM has been its poor capability of producing functional parts at production volumes that are competitive with traditional manufacturing. Modelling and simulation are powerful tools that can help shorten the design-build-test cycle by enabling rapid analysis of various product designs and process scenarios. Nevertheless, the capabilities and limitations of traditional and advanced manufacturing technologies define the bounds for new product development. Thus, it is important that designers have access to methods and tools that enable them to model and simulate product performance, together with the performance of the associated manufacturing process, to realize functional high-value products. The motivation for this dissertation research stems from the ongoing development of a novel high temperature superconducting (HTS) magnet assembly, which operates in a cryogenic environment. Its complexity requires the convergence of multidisciplinary expertise during design and prototyping. The research applies knowledge-based modelling to aid manufacturing process analysis and decision making in the design of the mechanical components of the HTS magnet. Further, it explores the feasibility of using AM in the production of the HTS magnet assembly. The developed approach uses product-process integrated modelling, based on physical experiments, to generate the quantitative and qualitative information that defines process-structure-property-performance interactions for given material-process combinations. The resulting interactions are then integrated into a graph-based model that can aid design space exploration and thereby assist early design and manufacturing decision making. To this end, test components are fabricated using two metal AM processes: wire arc additive manufacturing and selective laser melting. Metal alloys commonly used in structural applications (stainless steel, mild steel, high-strength low-alloy steel, aluminium, and copper alloys) are tested for their mechanical, thermal, and electrical properties. In addition, microstructural characterization of the alloys is performed to further understand the impact of manufacturing process parameters on material properties. The integrated modelling approach combines the collected experimental data, existing analytical and empirical relationships, and other data-driven models (e.g., finite element models, machine learning models) in the form of a decision support system that enables the optimal selection of material, manufacturing technology, process parameters, and other control variables for attaining the desired structure, property, and performance characteristics of the final printed component. The manufacturing decision making is performed through the implementation of a probabilistic model, i.e., a Bayesian network model, which is robust, modular, and adaptable to other manufacturing systems and product designs. The ability of the model to improve the throughput and quality of additive manufacturing processes advances the goals of sustainable manufacturing.
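    As a flavour of how a discrete Bayesian network can support such manufacturing decisions, the self-contained sketch below scores two processes by expected part quality; the variables and all probabilities are invented for illustration and are not taken from the dissertation's model.

```python
# A minimal, self-contained sketch of discrete Bayesian-network-style
# decision support for manufacturing: choose the process that maximises
# expected part quality, marginalising over a process parameter.
# All variables and probabilities are illustrative assumptions.

# Structure: process -> quality <- power (a process parameter).
P_process = {"WAAM": 0.5, "SLM": 0.5}
P_power = {"low": 0.4, "high": 0.6}
P_quality = {  # P(quality = "good" | process, power)
    ("WAAM", "low"): 0.55, ("WAAM", "high"): 0.70,
    ("SLM", "low"): 0.65, ("SLM", "high"): 0.85,
}

def expected_quality(process):
    """P(quality = good | process), marginalising over the power parameter."""
    return sum(P_power[pw] * P_quality[(process, pw)] for pw in P_power)

# Pick the process that maximises expected part quality.
best = max(P_process, key=expected_quality)
print({p: round(expected_quality(p), 3) for p in P_process}, "->", best)
```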

    Meta-ontology fault detection

    Get PDF
    Ontology engineering is the field within knowledge representation concerned with using logic-based formalisms to represent knowledge, typically in moderately sized knowledge bases called ontologies. How best to develop, use, and maintain these ontologies has produced relatively large bodies of formal, theoretical, and methodological research. One subfield of ontology engineering, ontology debugging, is concerned with preventing, detecting, and repairing errors (or, more generally, pitfalls, bad practices, or faults) in ontologies. Due to the logical nature of ontologies and, in particular, entailment, these faults are often hard both to prevent and to detect, and they have far-reaching consequences. This makes ontology debugging one of the principal challenges to the more widespread adoption of ontologies in applications. Another important subfield of ontology engineering is ontology alignment: combining multiple ontologies to produce more powerful results than the simple sum of the parts. Ontology alignment further increases the issues, difficulties, and challenges of ontology debugging by introducing, propagating, and exacerbating faults in ontologies. A relevant aspect of ontology debugging is that, due to these challenges and difficulties, research within it is usually notably constrained in scope, focusing on particular aspects of the problem, on the application to only certain subdomains, or on specific methodologies. Similarly, the approaches are often ad hoc and related to other approaches only at a conceptual level. There are no well-established and widely used formalisms, definitions, or benchmarks that form a foundation for the field of ontology debugging. In this thesis, I tackle the problem of ontology debugging from a more abstract point of view than usual, surveying the existing literature in the field, attempting to extract common ideas, and especially focusing on formulating them in a common language and under a common approach. Meta-ontology fault detection is a framework for detecting faults in ontologies that utilizes semantic fault patterns to express, in a systematic way, schematic entailments that typically indicate faults. The formalism that I developed to represent these patterns is called existential second-order query logic (abbreviated as ESQ logic). I further reformulated a large proportion of the ideas present in the existing research into this framework, as patterns in ESQ logic, providing a pattern catalogue. Most of the work during my PhD was spent designing and implementing an algorithm to automatically detect arbitrary ESQ patterns in arbitrary ontologies. The result is what we call minimal commitment resolution for ESQ logic: an extension of first-order resolution that draws on important ideas from higher-order unification and implements a novel approach to unification problems using dependency graphs. I have proven important theoretical properties of this algorithm, such as its soundness, its termination (in a certain sense and under certain conditions), and its fairness, or completeness, in the enumeration of infinite spaces of solutions. Moreover, I have produced an implementation of minimal commitment resolution for ESQ logic in Haskell that passes all unit tests and produces non-trivial results on small examples. However, attempts to apply the algorithm to examples of a more realistic size have proven unsuccessful, with computation times that exceed our tolerance levels.
    In this thesis I provide details of the challenges faced in this regard, along with other, successful forms of qualitative evaluation of the meta-ontology fault detection approach. I also discuss what I believe are the main causes of the computational feasibility problems, ideas on how to overcome them, and directions for future work that could use the results of the thesis to contribute foundational formalisms, ideas, and approaches to ontology debugging capable of properly combining the existing, constrained research. It is unclear to me whether minimal commitment resolution for ESQ logic can, in its current shape, be implemented efficiently, but I believe that, at the very least, the theoretical and conceptual underpinnings presented in this thesis will be useful in producing more foundational results in the field.
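    For context, the sketch below implements the plain propositional resolution rule that first-order resolution, and hence minimal commitment resolution, generalizes; it is only a toy illustration and omits unification, ESQ patterns, and the dependency-graph machinery described in the thesis.

```python
# A toy illustration of the resolution rule that minimal commitment
# resolution ultimately extends: from clauses {A, ~B} and {B, C},
# derive the resolvent {A, C}. Clauses are frozensets of literals;
# a literal is a (name, polarity) pair. This propositional sketch
# omits unification, ESQ patterns, and dependency graphs.

def resolve(c1, c2):
    """Return all resolvents of two clauses."""
    out = []
    for (name, pol) in c1:
        if (name, not pol) in c2:
            # Remove the complementary pair and merge the remainders.
            resolvent = (c1 - {(name, pol)}) | (c2 - {(name, not pol)})
            out.append(frozenset(resolvent))
    return out

c1 = frozenset({("A", True), ("B", False)})   # A or not-B
c2 = frozenset({("B", True), ("C", True)})    # B or C
print(resolve(c1, c2))                        # [frozenset({('A', True), ('C', True)})]
```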