3,565 research outputs found

    Evolutionary robotics: model or design?

    In this paper, I review recent work in evolutionary robotics (ER), and discuss the perspectives and future directions of the field. First, I propose to draw a crisp distinction between studies that exploit ER as a design methodology on the one hand, and studies that instead use ER as a modeling tool to better understand phenomena observed in biology. Such a distinction is not always obvious in the literature, however. It is my conviction that ER would profit from an explicit commitment to one or the other approach. Indeed, I believe that the constraints imposed by the specific approach would guide the experimental design and the analysis of the results obtained, thereby reducing arbitrary choices and promoting the adoption of principled methods that are common practice in the target domain, be it within engineering or the life sciences. Additionally, this would improve dissemination and the impact of ER studies on other disciplines, leading to the establishment of ER as a valid tool either for design or modeling purposes.

    From evolutionary computation to the evolution of things

    Evolution has provided a source of inspiration for algorithm designers since the birth of computers. The resulting field, evolutionary computation, has been successful in solving engineering tasks ranging in outlook from the molecular to the astronomical. Today, the field is entering a new phase as evolutionary algorithms that take place in hardware are developed, opening up new avenues towards autonomous machines that can adapt to their environment. We discuss how evolutionary computation compares with natural evolution and what its benefits are relative to other computing approaches, and we introduce the emerging area of artificial evolution in physical systems.
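
    The abstract stays at a high level, so as a concrete anchor, the following is a minimal sketch of the generate-vary-select loop that underlies most evolutionary algorithms. The fitness function, mutation scheme, and parameters are illustrative assumptions, not details from the paper.

```python
import random

# Illustrative fitness: maximize the sum of a real-valued genome.
# A stand-in for any engineering objective (an assumption, not from the paper).
def fitness(genome):
    return sum(genome)

def mutate(genome, sigma=0.1):
    # Gaussian perturbation of every gene, one of many possible variation operators.
    return [g + random.gauss(0.0, sigma) for g in genome]

def evolve(pop_size=20, genome_len=10, generations=100):
    # Random initial population of real-valued genomes.
    population = [[random.uniform(-1.0, 1.0) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Each parent yields one mutated offspring (a (mu+lambda)-style scheme).
        offspring = [mutate(p) for p in population]
        # Survivor selection: keep the best pop_size individuals overall.
        population = sorted(population + offspring, key=fitness,
                            reverse=True)[:pop_size]
    return max(population, key=fitness)

best = evolve()
print(f"best fitness: {fitness(best):.2f}")
```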

    Additive Manufacturing in the Healthcare Supply Chain


    An Ethical Framework for Artificial Intelligence and Sustainable Cities

    The digital revolution has brought ethical crossroads of technology and behavior, especially in the realm of sustainable cities. The need for a comprehensive and constructive ethical framework is emerging as digital platforms struggle to articulate the transformations required to accomplish Sustainable Development Goal (SDG) 11 (on sustainable cities) and the remainder of the related SDGs. The unequal structure of the global system leads to dynamic and systemic problems, which have a more significant impact on those who are most vulnerable. Ethical frameworks based only on the individual level are no longer sufficient, as they lack the necessary articulation to provide solutions to the new systemic challenges. A new ethical vision of digitalization must comprise an understanding of the scales and complex interconnections among the SDGs and the ongoing socioeconomic and industrial revolutions. Many current social systems are internally fragile and very sensitive to external factors and threats, which leads to unethical situations. Furthermore, the multilayered, net-like social tissue generates clusters of influence and leadership that prevent communities from developing properly. Digital technology has also had an impact at the individual level, posing several risks, including a more homogeneous and predictable humankind. To preserve the core of humanity, we propose an ethical framework to empower individuals, centered on cities and interconnected with the socioeconomic ecosystem and the environment through the complex relationships of the SDGs. Only by combining human-centered and collectiveness-oriented digital development will it be possible to construct new social models and interactions that are ethical. Thus, it is necessary to combine ethical principles with the digital innovation under way across all dimensions of sustainability.

    Engineering Resilient Space Systems

    Several distinct trends will influence space exploration missions in the next decade. Destinations are becoming more remote and mysterious, science questions more sophisticated, and, as mission experience accumulates, the most accessible targets are visited, advancing the knowledge frontier to more difficult, harsh, and inaccessible environments. This leads to new challenges, including: hazardous conditions that limit mission lifetime, such as the high radiation levels surrounding interesting destinations like Europa or the toxic atmospheres of planetary bodies like Venus; unconstrained environments with navigation hazards, such as free-floating active small bodies; multielement missions required to answer more sophisticated questions, such as Mars Sample Return (MSR); and long-range missions, such as Kuiper belt exploration, that must survive equipment failures over the span of decades. These missions will need to be successful without a priori knowledge of the most efficient data collection techniques for optimum science return. Science objectives will have to be revised ‘on the fly’, with new data collection and navigation decisions made on short timescales.

    Yet, even as science objectives become more ambitious, several critical resources remain unchanged. Since physics imposes insurmountable light-time delays, anticipated improvements to the Deep Space Network (DSN) will only marginally improve the bandwidth and communications cadence to remote spacecraft. Fiscal resources are increasingly limited, resulting in fewer flagship missions, smaller spacecraft, and less subsystem redundancy. As missions visit more distant and formidable locations, the job of the operations team becomes more challenging, seemingly inconsistent with the trend of shrinking mission budgets for operations support. How can we continue to explore challenging new locations without increasing risk or system complexity?

    These challenges are present, to some degree, for the entire Decadal Survey mission portfolio, as documented in Vision and Voyages for Planetary Science in the Decade 2013–2022 (National Research Council, 2011), but are especially acute for the following mission examples, identified in our recently completed KISS Engineering Resilient Space Systems (ERSS) study:
    1. A Venus lander, designed to sample the atmosphere and surface of Venus, would have to perform science operations as components and subsystems degrade and fail.
    2. A Trojan asteroid tour spacecraft would spend significant time cruising to its ultimate destination (essentially hibernating to save on operations costs), then, upon arrival, would have to act as its own surveyor, finding new objects and targets of opportunity as it approaches each asteroid, requiring response on short notice.
    3. An MSR campaign would not only be required to perform fast reconnaissance over long distances on the surface of Mars, interact with an unknown physical surface, and handle degradations and faults, but would also contain multiple components (launch vehicle, cruise stage, entry and landing vehicle, surface rover, ascent vehicle, orbiting cache, and Earth return vehicle) that dramatically increase the need for resilience to failure across the complex system.

    The concept of resilience and its relevance and application in various domains was a focus during the study, with several definitions of resilience proposed and discussed. While there was substantial variation in the specifics, a common conceptual core emerged: adaptation in the presence of changing circumstances. These changes were couched in various ways (anomalies, disruptions, discoveries), but they all ultimately had to do with changes in underlying assumptions. Invalid assumptions, whether due to unexpected changes in the environment or an inadequate understanding of interactions within the system, may cause unexpected or unintended system behavior. A system is resilient if it continues to perform the intended functions in the presence of invalid assumptions.

    Our study focused on areas of resilience that we felt needed additional exploration and integration, namely system and software architectures and capabilities, and autonomy technologies. (While also an important consideration, resilience in hardware is being addressed in multiple other venues, including two other KISS studies.) The study consisted of two workshops separated by a seven-month focused study period. The first workshop (Workshop #1) explored the ‘problem space’ as an organizing theme, and the second workshop (Workshop #2) explored the ‘solution space’. In each workshop, focused discussions and exercises were interspersed with presentations from participants and invited speakers. The study period between the two workshops was organized as part of the synthesis activity during the first workshop. The study participants, after spending the initial days of the first workshop discussing the nature of resilience and its impact on future science missions, decided to split into three focus groups, each with a particular thrust, to explore specific ideas further and develop material needed for the second workshop. The three focus groups and their areas of exploration were:
    1. Reference missions: address and refine the resilience needs by exploring a set of reference missions.
    2. Capability survey: collect, document, and assess current efforts, both inside and outside NASA, to develop capabilities and technology that could address the documented needs.
    3. Architecture: analyze the impact of architecture on system resilience, and provide principles and guidance for architecting greater resilience into our future systems.

    The key product of the second workshop was a set of capability roadmaps pertaining to the three reference missions, selected for their representative coverage of the types of space missions envisioned for the future. From these three roadmaps, we have extracted several common capability patterns that would be appropriate targets for near-term technical development: one focused on graceful degradation of system functionality, a second focused on data understanding for science and engineering applications, and a third focused on hazard avoidance and environmental uncertainty. Continuing work is extending these roadmaps to identify candidate enablers of the capabilities from three categories: architecture solutions, technology solutions, and process solutions.

    The KISS study allowed a collection of diverse and engaged engineers, researchers, and scientists to think deeply about the theory, approaches, and technical issues involved in developing and applying resilience capabilities. The conclusions summarize the varied and disparate discussions that occurred during the study, and include new insights about the nature of the challenge and potential solutions:
    1. There is a clear and definitive need for more resilient space systems. During our study period, the key scientists and engineers we engaged to understand potential future missions confirmed the scientific and risk-reduction value of greater resilience in the systems used to perform these missions.
    2. Resilience can be quantified in measurable terms: project cost, mission risk, and quality of science return. To consider resilience properly in the set of engineering trades performed during the design, integration, and operation of space systems, the benefits and costs of resilience need to be quantified. We believe, based on the work done during the study, that appropriate metrics to measure resilience must relate to risk, cost, and science quality/opportunity. Additional work is required to explicitly tie design decisions to these first-order concerns.
    3. There are many existing basic technologies that can be applied to engineering resilient space systems. Through the discussions during the study, we found many varied approaches and research efforts that address the various facets of resilience, some within NASA and many more beyond. Examples from civil architecture, Department of Defense (DoD) / Defense Advanced Research Projects Agency (DARPA) initiatives, ‘smart’ power grid control, cyber-physical systems, software architecture, and the application of formal verification methods to software were identified and discussed. The variety and scope of related efforts is encouraging and presents many opportunities for collaboration and development, and we expect many collaborative proposals and joint research efforts as a result of the study.
    4. Use of principled architectural approaches is key to managing complexity and integrating disparate technologies. The main challenge in considering highly resilient space systems is that the increase in capability can bring an increase in complexity, with all of the risks and costs associated with more complex systems. What is needed is a better way of conceiving space systems that enables the incorporation of capabilities without increasing complexity. We believe principled architecting approaches provide the needed means to convey a unified understanding of the system to primary stakeholders, thereby controlling complexity in the conception and development of resilient systems and enabling the integration of disparate approaches and technologies. A representative architectural example is included in Appendix F.
    5. Developing trusted resilience capabilities will require a diverse yet strategically directed research program. Despite the interest in, and benefits of, deploying resilient space systems, to date there has been a notable lack of meaningful demonstrated progress in systems capable of working in hazardous, uncertain situations. The roadmaps completed during the study, and documented in this report, provide the basis for a real, funded plan that considers the required fundamental work and evolution of needed capabilities.

    Exploring space is a challenging and difficult endeavor. Future space missions will require more resilience in order to perform the desired science in new environments under constraints of development and operations cost, acceptable risk, and communications delays. Development of space systems with resilient capabilities has the potential to expand the limits of possibility, revolutionizing space science by enabling as yet unforeseen missions and breakthrough science observations. Our KISS study provided an essential venue for the consideration of these challenges and goals. Additional work and future steps are needed to realize the potential of resilient systems; this study provided the necessary catalyst to begin this process.
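
    Conclusion 2 above argues that resilience should enter engineering trades as measurable quantities: cost, risk, and science quality. As a purely illustrative sketch of such a trade, the snippet below scores hypothetical design options with a weighted sum; the option names, scores, and weights are invented assumptions, not figures from the study.

```python
# Hypothetical design options scored (0..1, higher is better) on the three
# first-order concerns named in the study: cost, risk, and science return.
# All numbers below are invented for illustration.
options = {
    "baseline":             {"cost": 0.9, "risk": 0.4, "science": 0.6},
    "graceful_degradation": {"cost": 0.7, "risk": 0.7, "science": 0.7},
    "full_autonomy":        {"cost": 0.5, "risk": 0.8, "science": 0.9},
}

# Assumed stakeholder weights; a real trade study would derive these from
# mission requirements rather than assert them.
weights = {"cost": 0.3, "risk": 0.4, "science": 0.3}

def trade_score(scores):
    # Weighted sum: the simplest possible aggregation of the three metrics.
    return sum(weights[k] * scores[k] for k in weights)

# Rank the options from best to worst aggregate score.
for name, scores in sorted(options.items(), key=lambda kv: -trade_score(kv[1])):
    print(f"{name}: {trade_score(scores):.2f}")
```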

    Unveiling structure and dynamics of global digital production technology

    This research pioneers the construction of a novel Digital Production Technology Classification (DPTC) based on the latest Harmonised Commodity Description and Coding System (HS2017) of the World Customs Organisation. The DPTC enables the identification and comprehensive analysis of 127 tradable products associated with digital production technologies (DPTs). The development of this classification offers a substantial contribution to empirical research and policy analysis. It enables an extensive exploration of international trade in DPTs, such as the identification of emerging trade networks comprising final goods, intermediate components, and instrumentation technologies, and of the intricate regional and geopolitical dynamics related to DPTs. In this paper, we deploy our DPTC within a network analysis methodological framework to analyse countries' engagements with DPTs through bilateral and multilateral trade. By comparing the trade networks in DPTs in 2012 and 2019, we unveil dramatic shifts in the global DPT network structure, in different countries' roles, and in their degree of centrality. Notably, our findings shed light on China's expanding role and the changing trade patterns of the USA in the digital technology realm. The analysis also brings to the fore the increasing significance of Southeast Asian countries, revealing the emergence of a regional hub within this area, characterised by dense bilateral networks in DPTs. Furthermore, our study points to the fragmented network structures in Europe and the bilateral dependencies that have developed there. As this is the first systematic DPTC, deployed here within a network analysis framework, we expect the classification to become an indispensable tool for researchers, policymakers, and stakeholders engaged in research on digitalisation and digital industrial policy.
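
    The network methodology sketched in the abstract (bilateral trade flows treated as a directed, weighted graph, with country roles read off centrality measures) can be illustrated in a few lines. The example below uses networkx on invented trade flows; the country codes, values, and the choice of weighted in/out strength as the centrality proxy are placeholders, not the paper's data or exact method.

```python
import networkx as nx

# Toy bilateral DPT trade flows as (exporter, importer, value) triples.
# All flows and magnitudes are invented placeholders.
flows = [
    ("CHN", "USA", 120.0), ("CHN", "VNM", 45.0),
    ("USA", "MEX", 30.0),  ("DEU", "FRA", 25.0),
    ("VNM", "USA", 40.0),  ("KOR", "CHN", 35.0),
]

G = nx.DiGraph()
G.add_weighted_edges_from(flows)

# Weighted out/in strength as a simple centrality proxy: total export and
# import value per country.
out_strength = dict(G.out_degree(weight="weight"))
in_strength = dict(G.in_degree(weight="weight"))

for country in sorted(G.nodes):
    print(f"{country}: exports={out_strength[country]:6.1f} "
          f"imports={in_strength[country]:6.1f}")

# Building one such graph from 2012 flows and another from 2019 flows, then
# comparing the centrality rankings, mirrors the comparison the paper reports.
```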

    Emergence of assortative mixing between clusters of cultured neurons

    The analysis of the activity of neuronal cultures is considered a good proxy for the functional connectivity of in vivo neuronal tissues. Thus, the functional complex network inferred from activity patterns is a promising way to unravel the interplay between structure and functionality of neuronal systems. Here, we monitor the spontaneous self-sustained dynamics in neuronal cultures formed by interconnected aggregates of neurons (clusters). The dynamics are characterized by the fast activation of groups of clusters in sequences termed bursts. The analysis of the time delays between clusters' activations within the bursts allows the reconstruction of the directed functional connectivity of the network. We propose a method to statistically infer this connectivity and analyze the resulting properties of the associated complex networks. Surprisingly, and in contrast to what has been reported for many biological networks, the clustered neuronal cultures present assortative mixing connectivity values, meaning that clusters preferentially link to other clusters with similar functional connectivity, as well as a rich-club core, which shapes a "connectivity backbone" in the network. These results indicate that the grouping of neurons and the assortative connectivity between clusters are intrinsic survival mechanisms of the culture.
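
    For readers unfamiliar with the two statistics the abstract highlights, the sketch below shows how degree assortativity and the rich-club coefficient are commonly computed with networkx. The toy random graph stands in for the inferred functional network; it is a placeholder, not culture data, and the paper's exact estimators may differ.

```python
import networkx as nx

# Placeholder directed graph standing in for the inferred functional network.
G = nx.gnp_random_graph(30, 0.15, seed=1, directed=True)

# Degree assortativity: positive values indicate assortative mixing, i.e.
# well-connected nodes preferentially link to other well-connected nodes.
r = nx.degree_assortativity_coefficient(G)
print(f"degree assortativity: {r:.3f}")

# networkx defines the rich-club coefficient for undirected simple graphs,
# so the directed network is collapsed first.
U = nx.Graph(G)
rich_club = nx.rich_club_coefficient(U, normalized=False)
print({k: round(v, 3) for k, v in sorted(rich_club.items())[:5]})
```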

    Constructing living buildings: a review of relevant technologies for a novel application of biohybrid robotics

    Biohybrid robotics takes an engineering approach to the expansion and exploitation of biological behaviours for application to automated tasks. Here, we identify the construction of living buildings and infrastructure as a high-potential application domain for biohybrid robotics, and review technological advances relevant to its future development. Construction, civil infrastructure maintenance, and building occupancy have in recent decades comprised a major portion of economic production, energy consumption, and carbon emissions. Integrating biological organisms into automated construction tasks and permanent building components therefore has high potential for impact. Live materials can provide several advantages over standard synthetic construction materials, including self-repair of damage, improvement rather than degradation of structural performance over time, resilience to corrosive environments, support of biodiversity, and mitigation of urban heat islands. Here, we review relevant technologies, which are currently disparate. They span robotics, self-organizing systems, artificial life, construction automation, structural engineering, architecture, bioengineering, biomaterials, and molecular and cellular biology. In these disciplines, developments relevant to biohybrid construction and living buildings are in the early stages and are typically not exchanged between disciplines. We therefore consider this review useful to the future development of biohybrid engineering for this highly interdisciplinary application.

    Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation

    Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars that should be met throughout the system's entire life cycle: it should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective. However, attaining truly trustworthy AI concerns a wider vision that comprises the trustworthiness of all processes and actors that are part of the system's life cycle, and considers the previous aspects from different lenses. A more holistic vision contemplates four essential axes: the global principles for the ethical use and development of AI-based systems, a philosophical take on AI ethics, a risk-based approach to AI regulation, and the aforementioned pillars and requirements. The seven requirements (human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) are analyzed from a triple perspective: what each requirement for trustworthy AI is, why it is needed, and how it can be implemented in practice. In addition, a practical approach to implementing trustworthy AI systems allows the responsibility of AI-based systems facing the law to be defined through a given auditing process. The responsible AI system is thus the notion we introduce in this work, and a concept of utmost necessity that can be realized through auditing processes, subject to the challenges posed by the use of regulatory sandboxes. Our multidisciplinary vision of trustworthy AI culminates in a debate on the diverging views published lately about the future of AI. Our reflections on this matter conclude that regulation is key to reaching a consensus among these views, and that trustworthy and responsible AI systems will be crucial for the present and future of our society.