1,019 research outputs found

    Next generation system and software architectures: Challenges from future NASA exploration missions

    The four key objective properties that a system must exhibit in order to qualify as "autonomic" are now well accepted: self-configuring, self-healing, self-protecting, and self-optimizing. These are complemented by the attribute properties of being self-aware, environment-aware, self-monitoring, and self-adjusting. This paper describes the need for next-generation system software architectures in which components are agents, rather than objects masquerading as agents, and in which support is provided for self-* properties (both the existing self-CHOP properties and emerging self-* properties). These are discussed as exhibited in NASA missions, in particular with reference to ANTS, a NASA concept mission that is illustrative of future NASA exploration missions based on the technology of intelligent swarms.

    Self-Learning Cloud Controllers: Fuzzy Q-Learning for Knowledge Evolution

    Cloud controllers aim at responding to application demands by automatically scaling compute resources at runtime to meet performance guarantees and minimize resource costs. Existing cloud controllers often resort to scaling strategies that are codified as a set of adaptation rules. However, for a cloud provider, applications running on top of the cloud infrastructure are more or less black boxes, making it difficult to define optimal or pre-emptive adaptation rules at design time. Thus, the burden of making adaptation decisions is often delegated to the cloud application. Yet, in most cases, application developers in turn have limited knowledge of the cloud infrastructure. In this paper, we propose learning adaptation rules at runtime. To this end, we introduce FQL4KE, a self-learning fuzzy cloud controller. In particular, FQL4KE learns and modifies fuzzy rules at runtime. The benefit is that, when designing cloud controllers, we do not have to rely solely on precise design-time knowledge, which may be difficult to acquire. FQL4KE empowers users to specify cloud controllers by simply adjusting weights representing priorities among system goals, instead of specifying complex adaptation rules. The applicability of FQL4KE has been experimentally assessed as part of the cloud application framework ElasticBench. The experimental results indicate that FQL4KE outperforms both our previously developed fuzzy controller without learning mechanisms and the native Azure auto-scaling.
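To make the reinforcement-learning side of such a controller concrete, the following is a minimal tabular Q-learning sketch for auto-scaling decisions. It is not the paper's FQL4KE (which layers Q-learning over a fuzzy rule base); the state encoding, action set, and reward shape here are illustrative assumptions.

```python
import random

# Minimal tabular Q-learning sketch for an auto-scaling controller.
# NOT FQL4KE itself: states, actions, and the reward are assumptions.

ACTIONS = (-1, 0, +1)          # remove a VM, do nothing, add a VM

class QScaler:
    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {}            # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:            # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update toward reward + discounted best next value.
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

def reward(latency_ms, vms, slo_ms=200, vm_cost=1.0):
    """Penalize SLO violations and resource cost (illustrative weights)."""
    return -(max(0, latency_ms - slo_ms) + vm_cost * vms)
```

The reward trades off the two goals the abstract names (performance guarantees vs. resource cost); in FQL4KE the user's goal weights would play a similar role to the `slo_ms`/`vm_cost` parameters here.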

    Design and Performance Analysis of Genetic Algorithms for Topology Control Problems

    In this dissertation, we present a bio-inspired decentralized topology control mechanism, called the force-based genetic algorithm (FGA), in which a genetic algorithm (GA) is run by each autonomous mobile node to achieve a uniform spread of mobile nodes and to provide a fully connected network over an unknown area. We present a formal analysis of FGA in terms of convergence speed, uniformity of area coverage, and Lyapunov stability. This dissertation emphasizes the use of mobile nodes to achieve a uniform distribution over an unknown terrain without a priori information or a central control unit. Instead, each mobile node running our FGA makes its own movement direction and speed decisions based on local neighborhood information, such as obstacles and the number of neighbors, without a centralized control unit or global knowledge. We have implemented simulation software in Java and developed four different testbeds to study the effectiveness of different GA-based topology control frameworks with respect to network performance metrics including node density, speed, and the number of generations that the GAs run. The stochastic behavior of FGA, like that of all GA-based approaches, makes its convergence speed difficult to analyze. We built metrically transitive homogeneous and inhomogeneous Markov chain models to analyze the convergence of our FGA with respect to the communication ranges of mobile nodes and the total number of nodes in the system. The Dobrushin contraction coefficient of ergodicity is used to measure convergence speed for the homogeneous and inhomogeneous Markov chain models of our FGA. Furthermore, this convergence analysis helps us to choose near-optimal values for the communication range, the number of mobile nodes, and the mean node degree before sending autonomous mobile nodes on any mission. Our analytical and experimental results show that our FGA delivers promising results for uniform mobile node distribution over unknown terrains.
    Since our FGA adapts to the local environment rapidly and does not require global network knowledge, it can be used as a real-time topology controller for commercial and military applications.
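The per-node decision loop described above can be sketched as a tiny GA searching over candidate (direction, speed) moves, scored by a force-like potential over neighbor distances. This is a minimal illustration, not the dissertation's exact formulation; the fitness shape, target spacing, and GA parameters are assumptions.

```python
import math, random

# Sketch of one FGA step for a single node: a small GA over candidate
# (direction, speed) moves, scoring each by how close it brings the
# node's neighbors to a desired separation. Parameters are illustrative.

def fitness(pos, neighbors, target=10.0):
    """Lower is better: squared deviation of each neighbor distance
    from the desired inter-node spacing (a virtual-force potential)."""
    return sum((math.dist(pos, n) - target) ** 2 for n in neighbors)

def fga_step(pos, neighbors, pop=20, gens=30, max_speed=2.0, rng=None):
    rng = rng or random.Random()
    # Initial population of candidate (direction, speed) moves.
    population = [(rng.uniform(0, 2 * math.pi), rng.uniform(0, max_speed))
                  for _ in range(pop)]

    def apply(move):
        d, s = move
        return (pos[0] + s * math.cos(d), pos[1] + s * math.sin(d))

    for _ in range(gens):
        population.sort(key=lambda m: fitness(apply(m), neighbors))
        elite = population[: pop // 2]          # truncation selection
        # Refill by mutating elites (Gaussian perturbation, speed clipped).
        population = elite + [
            ((m[0] + rng.gauss(0, 0.3)) % (2 * math.pi),
             min(max_speed, max(0.0, m[1] + rng.gauss(0, 0.3))))
            for m in elite]
    return apply(min(population, key=lambda m: fitness(apply(m), neighbors)))
```

Each node would run such a step repeatedly using only its local neighbor positions, which is what makes the scheme decentralized.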

    Information Processor

    This work examines how computational technology entered, and gradually became deeply embedded in, the architectural design process. In the architecture domain, not only has the share of assistance provided by computational techniques increased dramatically, but the role these techniques play has also gradually shifted from a supporting one to a generative one. No longer limited to being a complex mathematical calculator, the computer has become a ubiquitous necessity in our daily lives and even influences the way we live. This is especially true for the young generation born into this digital world, commonly referred to as "Generation Z". Business Insider, a fast-growing business media website, noted that "Gen Z-ers are digitally over-connected. They multitask across at least five screens daily and spend 41% of their time outside of school with computers or mobile devices, compared to 22% 10 years ago, according to the Sparks & Honey report." When Alan Turing helped build the room-sized electromechanical machines that deciphered Nazi ciphers, he could not have expected that such a machine would one day fit in one's pocket and compute millions of times more data. Compared to the era of tools such as paper and pen, the computer in today's context is heavily used and relied upon as a powerful instrument. This change is remarkable given the relatively short period of time involved, especially after 1981, when the first IBM personal computer was released (Mitchell, 1990). Architectural design cannot be excluded from this inevitable technological tendency: even the most conservative architecture firms are now required to deliver digital technical drawings to communicate among designers, clients, and construction firms.
    Incorporating computer technology also gives young designers the opportunity to experiment with creating relatively complex geometry-based architectural space. But before applying this powerful technology to architectural design, the crucial knowledge architects had to understand was the manner and procedure of "processing information". Without information, the computer would just lie on one's desk as a useless cube, like a vehicle without a driver or a body without a soul. The shifting roles of computer technology in architectural design are defined by how designers interpret, digest, and process the streams of information flow.

    Maple-Swarm: programming collective behavior for ensembles by extending HTN-planning

    Programming goal-oriented behavior in collective adaptive systems is complex, effort-intensive, and failure-prone. If the system's user wants to deploy it in a real-world environment, the hurdles get even higher: programs urgently need to be situation-aware. With our framework Maple, we previously presented an approach for easing the programming of such systems at the level of particular robot capabilities. In this paper, we extend our approach to ensemble programming with the possibility of addressing virtual swarm capabilities, which encapsulate collective behavior, to whole groups of agents. By using the respective concepts in an extended version of hierarchical task networks, and by adapting our self-organization mechanisms for executing the resulting plans, we can have all agents, any single agent, any other set of agents, or a swarm of agents execute (swarm) capabilities. Moreover, we extend the possibilities for expressing situation awareness during planning by introducing planning variables that can be modified at design time or run time as needed. We illustrate each of these possibilities with examples. Further, we provide a graphical front-end that can generate mission-specific problem domain descriptions for ensembles, including a lightweight simulation for validating plans.
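The core HTN idea the abstract builds on can be sketched as recursive task decomposition: compound tasks expand through methods into primitive (swarm) capabilities, each tagged with an addressing mode (all agents / any agent / the swarm as a whole). The task names and methods below are invented examples, not Maple-Swarm's actual domain description.

```python
# HTN-style decomposition sketch for ensemble tasks. The domain
# (tasks, methods, addressing modes) is a made-up illustration.

METHODS = {
    # compound task -> ordered list of subtasks
    "explore_area":  ["form_swarm", "sweep", "report"],
    "sweep":         ["move_pattern", "sense"],
}

PRIMITIVES = {
    # primitive capability -> which agents execute it
    "form_swarm":   "swarm",   # the whole group acts collectively
    "move_pattern": "all",     # every agent executes it
    "sense":        "all",
    "report":       "any",     # any single agent suffices
}

def decompose(task):
    """Recursively expand a task into (capability, addressing) pairs."""
    if task in PRIMITIVES:
        return [(task, PRIMITIVES[task])]
    plan = []
    for sub in METHODS[task]:
        plan.extend(decompose(sub))
    return plan
```

In the paper's setting, the resulting flat plan would then be distributed to the ensemble via the self-organization mechanisms, with planning variables allowing parts of the domain to change at run time.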
