87 research outputs found

    Multidisciplinary perspectives on Artificial Intelligence and the law

    Get PDF
    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    Understanding Quantum Technologies 2022

    Full text link
    Understanding Quantum Technologies 2022 is a creative-commons ebook that provides a unique 360-degree overview of quantum technologies, from science and technology to geopolitical and societal issues. It covers quantum physics history, quantum physics 101, gate-based quantum computing, quantum computing engineering (including quantum error correction and quantum computing energetics), quantum computing hardware (all qubit types, including quantum annealing and quantum simulation paradigms; history, science, research, implementation and vendors), quantum enabling technologies (cryogenics, control electronics, photonics, component fabs, raw materials), quantum computing algorithms, software development tools and use cases, unconventional computing (potential alternatives to quantum and classical computing), quantum telecommunications and cryptography, quantum sensing, quantum technologies around the world, the societal impact of quantum technologies and even quantum fake sciences. The main audiences are computer science engineers, developers and IT specialists, as well as quantum scientists and students who want to acquire a global view of how quantum technologies work, particularly quantum computing. This version is an extensive update to the 2021 edition published in October 2021. (1132 pages, 920 figures, Letter format.)

    Proceedings of the 22nd Conference on Formal Methods in Computer-Aided Design – FMCAD 2022

    Get PDF
    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.

    Shader optimization and specialization

    Get PDF
    In the field of real-time graphics for computer games, performance has a significant effect on the player’s enjoyment and immersion. Graphics processing units (GPUs) are hardware accelerators that run small parallelized shader programs to speed up computationally expensive rendering calculations. This thesis examines optimizing shader programs and explores ways in which data patterns on both the CPU and GPU can be analyzed to automatically speed up rendering in games. Initially, the effect of traditional compiler optimizations on shader source code was explored. Techniques such as loop unrolling or arithmetic reassociation provided speed-ups on several devices, but different GPU hardware responded differently to each set of optimizations. Analyzing execution traces from numerous popular PC games revealed that much of the data passed from CPU-based API calls to GPU-based shaders is either unused or remains constant. A system was developed to capture this constant data and fold it into the shaders’ source code. Re-running the games’ rendering code using these specialized shader variants resulted in performance improvements in several commercial games without impacting their visual quality.
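    As a rough illustration of the specialization step described in this abstract, the sketch below folds constant uniform values into shader source by text substitution. It is a minimal sketch, not the thesis's actual tooling: the shader text, the observed_constants table and the pattern matching are all assumptions made for the example.

```python
import re

# Uniform values observed to stay constant across captured frames
# (hypothetical data for illustration).
observed_constants = {"fog_density": "0.02", "use_shadows": "false"}

shader_src = """
uniform float fog_density;
uniform bool use_shadows;
uniform vec3 light_dir; // varies per frame, left untouched
void main() { /* ... */ }
"""

def specialize(src: str, constants: dict) -> str:
    """Fold uniforms whose values never change into compile-time
    constants, so the shader compiler can propagate and fold them."""
    def rewrite(match: re.Match) -> str:
        _, type_, name = match.groups()
        if name in constants:
            return f"const {type_} {name} = {constants[name]};"
        return match.group(0)  # keep truly dynamic uniforms as-is
    return re.sub(r"(uniform)\s+(\w+)\s+(\w+)\s*;", rewrite, src)

print(specialize(shader_src, observed_constants))
```

    A real pipeline would of course rewrite the compiler's intermediate representation rather than raw text, but the payoff is the same: declarations that the driver's compiler can constant-fold away.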

    Whole-Body Teleoperation of Humanoid Robots (Télé-opération Corps Complet de Robots Humanoïdes)

    Get PDF
    This thesis aims to investigate systems and tools for teleoperating a humanoid robot. Robot teleoperation is crucial to send and control robots in environments that are dangerous or inaccessible for humans (e.g., disaster response scenarios, contaminated environments, or extraterrestrial sites). The term teleoperation most commonly refers to direct and continuous control of a robot. In this case, the human operator guides the motion of the robot with her/his own physical motion or through some physical input device. One of the main challenges is to control the robot in a way that guarantees its dynamical balance while trying to follow the human references. In addition, the human operator needs some feedback about the state of the robot and its work site through remote sensors in order to comprehend the situation or feel physically present at the site, producing effective robot behaviors. Complications arise when the communication network is non-ideal. In this case the commands from human to robot, together with the feedback from robot to human, can be delayed. These delays can be very disturbing for the human operator, who cannot teleoperate their robot avatar in an effective way.

    Another crucial point to consider when setting up a teleoperation system is the large number of parameters that have to be tuned to effectively control the teleoperated robots. Machine learning approaches and stochastic optimizers can be used to automate the learning of some of these parameters.

    In this thesis, we proposed a teleoperation system that has been tested on the humanoid robot iCub. We used an inertial-technology-based motion capture suit as the input device to control the humanoid and a virtual reality headset connected to the robot cameras to get visual feedback. We first translated the human movements into equivalent robot ones by developing a motion retargeting approach that achieves human-likeness while trying to ensure the feasibility of the transferred motion. We then implemented a whole-body controller to enable the robot to track the retargeted human motion. The controller was later optimized in simulation to achieve good tracking of the whole-body reference movements, using a multi-objective stochastic optimizer, which allowed us to find robust solutions working on the real robot in a few trials. To teleoperate walking motions, we implemented a higher-level teleoperation mode in which the user can use a joystick to send reference commands to the robot. We integrated this setting in the teleoperation system, which allows the user to switch between the two different modes.

    A major problem preventing the deployment of such systems in real applications is the presence of communication delays between the human input and the feedback from the robot: even a few hundred milliseconds of delay can irremediably disturb the operator, let alone a few seconds. To overcome these delays, we introduced a system in which a humanoid robot executes commands before it actually receives them, so that the visual feedback appears synchronized to the operator, whereas the robot executed the commands in the past. To do so, the robot continuously predicts future commands by querying a machine learning model that is trained on past trajectories and conditioned on the last received commands.
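    The delay-compensation scheme described above can be made concrete with a small sketch. The following is a minimal illustration assuming a fixed network delay of k steps and a simple linear autoregressive predictor standing in for the learned model; the toy trajectories and all names are invented for the example.

```python
import numpy as np

H, K = 5, 3  # history window and assumed network delay in steps

def fit_predictor(trajectories, h=H):
    """Least-squares map from the last h commands to the next one;
    a stand-in for the learned model mentioned in the abstract."""
    X, y = [], []
    for traj in trajectories:
        for t in range(h, len(traj)):
            X.append(traj[t - h:t])
            y.append(traj[t])
    w, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)
    return w

def predict_ahead(w, received, k=K):
    """Roll the one-step model forward k steps so the robot can act
    on commands it has not yet received."""
    window = list(received)
    for _ in range(k):
        window.append(float(np.dot(window[-H:], w)))
    return window[-k:]  # the robot executes these predicted commands

# Toy 1-D command trajectories (e.g. one joystick axis), for illustration.
rng = np.random.default_rng(0)
trajs = [np.sin(np.linspace(0, 6, 80) + rng.normal()) for _ in range(10)]
w = fit_predictor(trajs)
print(predict_ahead(w, trajs[0][:40]))
```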

    Simulation methodologies for mobile GPUs

    Get PDF
    GPUs critically rely on a complex system software stack comprising kernel- and user-space drivers and JIT compilers. Yet, existing GPU simulators typically abstract away details of the software stack and the GPU instruction set. Partly, this is because GPU vendors rarely release sufficient information about their latest GPU products. However, it is also due to the lack of an integrated CPU-GPU simulation framework complete and powerful enough to drive the complex GPU software environment. This has led to a situation where research on GPU architectures and compilers is largely based on outdated or greatly simplified architectures and software stacks, undermining the validity of the generated results. Making the situation even more dire, existing GPU simulation efforts are concentrated around desktop GPUs, so infrastructure for modelling mobile GPUs is virtually non-existent, despite their surging importance in the GPU market. Still, mobile GPU designers face the challenge of evaluating design alternatives involving hundreds of architectural configuration options and micro-architectural improvements under tight time-to-market constraints, to which the currently employed design flows, involving detailed but slow simulations, are not well suited. In this thesis we develop a full-system simulation environment for a mobile platform, which enables users to run a complete and unmodified software stack for a state-of-the-art device powered by a mobile Arm CPU and a Mali Bifrost GPU, achieving 100% architectural accuracy across all available toolchains. We demonstrate the capability of our GPU simulation framework through a number of case studies exploring modern mobile GPU applications, and optimize them using functional simulation statistics that are unavailable with other approaches or in hardware. Furthermore, we develop a trace-based performance model, allowing architects to rapidly model GPU configurations in early design space exploration.
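    A trace-based performance model of the kind mentioned at the end of this abstract can be sketched in a few lines: weight a dynamic instruction-mix trace by per-class costs. The instruction classes, cost table and trace format below are invented for illustration and are far simpler than a real Mali Bifrost model.

```python
# Minimal trace-based cost model: weight a dynamic instruction-mix
# trace by assumed per-class issue costs (all numbers hypothetical).
CYCLES_PER_OP = {"fma": 1.0, "ld": 4.0, "st": 4.0, "tex": 16.0, "branch": 2.0}

def estimate_cycles(trace):
    """Estimate execution cycles from a functional-simulation trace
    given as (opcode_class, dynamic_count) pairs."""
    return sum(CYCLES_PER_OP[op] * count for op, count in trace)

trace = [("fma", 120_000), ("ld", 30_000), ("tex", 8_000), ("branch", 5_000)]
print(f"estimated cycles: {estimate_cycles(trace):,.0f}")
```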

    Model-Based Design of Field Device Applications (Modellgestützter Entwurf von Feldgeräteapplikationen)

    Get PDF
    The development of field devices is a very complex procedure. Many preconditions need to be met, various requirements and constraints need to be addressed, and there are only a few publications on the topic. Due to the ongoing digitalization, more and more solution providers are entering the industrial automation market, and technologies and approaches from the context of the Internet of Things are increasingly being used in the automation domain. These approaches range from sensors without the descriptions typical in industry up to marketplaces where integrators and users can buy software components for plants. For new suppliers, who often do not come from the classical automation business, the already existing models, functionalities, profiles, and descriptions are not always easy to use. This results in disruptive solutions based on newly defined specifications and models. Despite this disruptiveness, the aim should be not to reinvent the proven automation functions, but to use them effectively and efficiently on different platforms depending on the requirements. This explicitly includes the flexible distribution of automation functions to heterogeneous networked resources. The platforms can be classical field devices and controllers, as well as normal desktop PCs and IoT nodes.

    The aim of this thesis is to develop a toolchain for the model-based design of field device applications based on profiles, and thus also for the extended design of distributed plant applications. To this end, different description methods are evaluated in order to enrich them with detailed descriptions of parameters and process data. Furthermore, modular concepts are used and preparations are made for the use of semantics in the design process. With regard to the device engineering process, the share of automated device engineering is increased. This leads to more flexible device development, allowing customers to perform the networking of the functional elements themselves. Customers should also be able to deploy their own functional elements to the manufacturers' devices, which requires an automated creation of device descriptions. All these extensions then enable the last major step towards a distributed application over heterogeneous infrastructures: the functional elements can not only be distributed by device manufacturers, but can also be used on different platforms of different device manufacturers. This is accompanied by the device-independent definition of functionality required for current developments such as Industry 4.0. All information created during engineering can be reused at the different levels of the automation pyramid and throughout the life cycle. An integration of various device families from outside automation technology, such as IoT devices and IT devices, is thus conceivable.

    After an analysis of the relevant techniques, technologies, concepts, methods, and specifications, a toolchain for the model-based design of field devices was developed, and the required tool parts and extensions to existing descriptions were discussed. This concept was then extended to distributed design on heterogeneous hardware and heterogeneous platforms, before both concepts were prototypically implemented and evaluated. The evaluation is based on a two-part scenario from the perspectives of a device manufacturer and of an integrator. The developed solution integrates approaches from the context of Industry 4.0 and IoT. It contributes to a simplified and more efficient automation of engineering, in which profiles can be used as building blocks for the functionality of field devices and plant applications. Existing limitations in engineering are thus reduced, so that a distribution of functionality across heterogeneous hardware and heterogeneous platforms becomes possible, contributing to the flexibility of automation systems.
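    As a rough sketch of profiles serving as building blocks, the fragment below models a profile as a reusable description of parameters and process data from which a simple device description can be generated. The structure and field names are assumptions for illustration and do not follow any particular profile standard.

```python
from dataclasses import dataclass, field

@dataclass
class Parameter:
    name: str
    datatype: str
    unit: str = ""

@dataclass
class Profile:
    """A reusable functional building block: parameters plus process data."""
    name: str
    parameters: list = field(default_factory=list)
    process_data: list = field(default_factory=list)

def device_description(device: str, profiles: list) -> dict:
    """Generate a simple device description from the composed profiles."""
    return {
        "device": device,
        "profiles": [
            {"name": p.name,
             "parameters": [vars(q) for q in p.parameters],
             "processData": [vars(q) for q in p.process_data]}
            for p in profiles
        ],
    }

temp = Profile("TemperatureSensor",
               parameters=[Parameter("upper_limit", "float", "°C")],
               process_data=[Parameter("temperature", "float", "°C")])
print(device_description("demo-transmitter", [temp]))
```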

    Robust Low-Overhead Binary Rewriting: Design, Extensibility, And Customizability

    Get PDF
    Binary rewriting is the foundation of a wide range of binary analysis tools and techniques, including securing untrusted code, enforcing control-flow integrity, dynamic optimization, profiling, race detection, and taint tracking to prevent data leaks. There are two equally important and necessary criteria that a binary rewriter must meet: it must be robust and it must incur low overhead. First, a binary rewriter must work for different binaries, including those produced by commercial compilers from a wide variety of languages, and possibly modified by obfuscation tools. Second, it must incur low overhead. Although off-line uses such as testing and profiling can tolerate large overheads, binary rewriters in deployed programs must not introduce significant overheads; typically, no more than a few percent. Existing binary rewriters have their challenges: static rewriters do not reliably work for stripped binaries (i.e., those without relocation information), and dynamic rewriters suffer from high base overhead. Because of this high overhead, existing dynamic rewriters are limited to off-line testing and cannot be practically used in deployment.

    In the first part, we designed and implemented a dynamic binary rewriter called RL-Bin, a robust binary rewriter that can instrument binaries reliably with very low overhead. Unlike existing static rewriters, RL-Bin works for all benign binaries, including stripped binaries that do not contain relocation information. In addition, RL-Bin does not suffer from high overhead because its design is not based on the code cache, which is the primary mechanism for other dynamic rewriters such as Pin, DynamoRIO, and Dyninst. RL-Bin's design and optimization methods enable it to rewrite binaries with very low runtime overhead (1.04x on average for SPECrate 2017) and very low memory overhead (1.69x for SPECrate 2017). In comparison, existing dynamic rewriters have a high runtime overhead (1.16x for DynamoRIO, 1.29x for Pin, and 1.20x for Dyninst) and a larger memory footprint (2.5x for DynamoRIO, 2.73x for Pin, and 2.3x for Dyninst). RL-Bin differentiates itself from other rewriters by having negligible overhead, proportional to the added instrumentation. This low overhead is achieved by utilizing an in-place design and applying multiple novel optimization methods. As a result, lightweight instrumentation can be added to applications deployed in live systems for monitoring and analysis purposes.

    In the second part, we present RL-Bin++, an improved version of RL-Bin that handles various problematic real-world features commonly found in obfuscated binaries. We demonstrate the effectiveness of RL-Bin++ for the SPECrate 2017 benchmark obfuscated with the UPX, PECompact, and ASProtect obfuscation tools. RL-Bin++ can efficiently instrument heavily obfuscated binaries (overhead averaging 2.76x, compared to the 4.11x, 4.72x, and 5.31x overheads caused by DynamoRIO, Dyninst, and Pin, respectively). The major accomplishment, however, is that we achieved this while maintaining the low overhead of RL-Bin for unobfuscated binaries (only 1.04x). The extra level of robustness is achieved by employing dynamic deobfuscation techniques and using a novel hybrid in-place and code-cache design.

    Finally, to show the efficacy of RL-Bin in the development of sophisticated and efficient analysis tools, we designed, implemented, and tested two novel applications of RL-Bin: an application-level file access permission system and a security tool for enforcing secure execution of applications. Using RL-Bin's system call instrumentation capability, we developed a fine-grained file access permission system that enables the user to define separate file access policies for each application. The overhead is very low, only 6%, making this tool practical for use in live systems. Second, we designed a security enforcement tool that instruments indirect control transfer instructions to ensure that program execution follows the predetermined anticipated path, protecting the application from being hijacked. Our implementation proved effective at detecting exploits in real-world programs while remaining practical, with a low overhead of only 9%.
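    The per-application file permission idea can be illustrated with a small policy check. This is only a sketch of the policy logic, not RL-Bin's instrumentation: in the actual tool the check runs in instrumentation inserted at file-related system calls, whereas the policy table and names below are invented for the example.

```python
import fnmatch

# Hypothetical per-application policy: allowed path patterns per mode.
POLICY = {
    "editor.exe":  {"read": ["C:/docs/*"], "write": ["C:/docs/*"]},
    "browser.exe": {"read": ["C:/cache/*"], "write": ["C:/cache/*"]},
}

def allowed(app: str, path: str, mode: str) -> bool:
    """Decision that instrumentation at a file-open system call would
    consult: allow only if a pattern for this app and mode matches."""
    patterns = POLICY.get(app, {}).get(mode, [])
    return any(fnmatch.fnmatch(path, pat) for pat in patterns)

print(allowed("editor.exe", "C:/docs/notes.txt", "write"))   # True
print(allowed("browser.exe", "C:/docs/notes.txt", "read"))   # False
```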

    Smart Wireless Sensor Networks

    Get PDF
    The recent development of communication and sensor technology has resulted in the growth of a new, attractive and challenging area: wireless sensor networks (WSNs). A wireless sensor network consisting of a large number of sensor nodes is deployed in the field to serve various applications. Equipped with wireless communication and intelligent computation, these nodes become smart sensors that not only perceive ambient physical parameters but are also able to process information, cooperate with each other and self-organize into a network. These features help the sensor nodes, as well as the network, operate more efficiently in terms of both data acquisition and energy consumption. The special purposes of the applications require WSNs to be designed and operated differently from conventional networks such as the Internet. The network design must take into account the objectives of specific applications and the nature of the deployment environment. The limited resources of sensor nodes, such as memory, computational ability, communication bandwidth and energy, are the main challenges in network design. A smart wireless sensor network must be able to deal with these constraints while guaranteeing the connectivity, coverage, reliability and security of the network's operation for a maximized lifetime. This book discusses various aspects of designing such smart wireless sensor networks. Main topics include: design methodologies, network protocols and algorithms, quality of service management, coverage optimization, time synchronization and security techniques for sensor networks.

    Sky’s No Limit: Space-Based Solar Power, The Next Major Step in The Indo-US Strategic Partnership

    Get PDF
    IDSA Occasional Paper No. 9, 2010. This paper provides a policymaker's overview of a highly scalable, revolutionary, renewable energy technology, Space-Based Solar Power (SBSP), and evaluates its utility within the context of the Indo-US strategic partnership. After providing an overview of the concept and its significance to the compelling problems of sustainable growth, economic development, energy security and climate change, it evaluates the utility of the concept within the respective Indian and US political contexts and energy-climate trajectories. The paper concludes that a bilateral initiative to develop Space-Based Solar Power is highly consistent with the objectives of the Indo-US strategic partnership, and ultimately recommends an actionable three-tiered programme to realize its potential.
