
    Search based software engineering: Trends, techniques and applications

    © ACM, 2012. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version is available from the link below.
    In the past five years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to Software Engineering (SE) in which Search-Based Optimization (SBO) algorithms are used to address problems in SE. SBSE has been applied to problems throughout the SE lifecycle, from requirements and project planning to maintenance and reengineering. The approach is attractive because it offers a suite of adaptive, automated and semi-automated solutions in situations typified by large, complex problem spaces with multiple competing and conflicting objectives. This article provides a review and classification of the literature on SBSE. It identifies research trends and relationships between the techniques applied and the applications to which they have been applied, and highlights gaps in the literature and avenues for further research.
    EPSRC and E
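
    To make the idea concrete, here is a minimal, hypothetical sketch of search-based optimization applied to one classic SE problem, test-suite minimization. It is not taken from the article; the coverage data, function names, and fitness weighting are illustrative assumptions. A simple hill climber searches for a small subset of tests that still covers every requirement.

    import random

    # Hypothetical coverage data: test name -> set of requirements it covers.
    coverage = {
        "t1": {"r1", "r2"},
        "t2": {"r2", "r3"},
        "t3": {"r1", "r3", "r4"},
        "t4": {"r4"},
    }

    def fitness(subset):
        """Reward requirement coverage strongly, penalize suite size lightly."""
        covered = set().union(*(coverage[t] for t in subset)) if subset else set()
        return len(covered) * 10 - len(subset)

    def hill_climb(iterations=1000, seed=0):
        rng = random.Random(seed)
        current = set(coverage)              # start from the full test suite
        for _ in range(iterations):
            neighbour = set(current)
            test = rng.choice(sorted(coverage))
            neighbour.symmetric_difference_update({test})   # flip one test in/out
            if fitness(neighbour) >= fitness(current):
                current = neighbour
        return current

    best = hill_climb()
    print(best, fitness(best))

    The same pattern (representation, fitness function, neighbourhood move) carries over to other SE problems the article surveys, with genetic algorithms or simulated annealing substituted for the hill climber.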

    Building Machines That Learn and Think Like People

    Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
    Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar

    Human-Centered Automation for Resilience in Acquiring Construction Field Information

    Resilient acquisition of timely, detailed job site information plays a pivotal role in maintaining the productivity and safety of construction projects with busy schedules, dynamic workspaces, and unexpected events. In the field, construction information acquisition typically involves three types of activities: sensor-based inspection, manual inspection, and communication. Human interventions play critical roles in all three, and a resilient information acquisition system is needed for safer and more productive construction. Various automation technologies could help improve human performance by proactively providing the knowledge needed to operate equipment, improving situation awareness in multi-person collaborations, and reducing the mental workload of operators and inspectors. Unfortunately, few studies consider human factors in automation techniques for construction field information acquisition. Full utilization of these techniques requires a systematic synthesis of the interactions between humans, tasks, and the construction workspace, so that the complexity of information acquisition tasks is reduced and humans can complete them reliably. Such a synthesis of human factors in field data collection and analysis paves the path towards "Human-Centered Automation" (HCA) in construction management. HCA could form a computational framework that supports resilient field data collection while accounting for human factors and unexpected events on dynamic job sites. This dissertation presents an HCA framework for resilient construction field information acquisition and the results of examining three HCA approaches that support three use cases of construction field data collection and analysis. The first is an automated data collection planning method that assists construction inspectors with 3D laser scan planning to achieve comprehensive and efficient data collection. The second is a Bayesian model-based approach that automatically aggregates the common sense of people on the internet to identify job site risks from large numbers of job site pictures. The third is an automatic communication protocol optimization approach that maximizes the team situation awareness of construction workers and leads to early detection of workflow delays and critical path changes. Data collection and simulation experiments extensively validate these three HCA approaches.
    Dissertation/Thesis: Doctoral Dissertation, Civil, Environmental and Sustainable Engineering, 201
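
    As a purely illustrative aside, the Bayesian aggregation idea behind the second approach can be sketched as follows: independent yes/no risk votes on a job site picture are combined into a posterior probability that the picture shows a hazard. This is not the dissertation's actual model; the prior, sensitivity, and specificity values are hypothetical.

    # Minimal illustrative Bayesian vote aggregation (hypothetical numbers).
    def posterior_risk(votes, prior=0.2, sensitivity=0.8, specificity=0.9):
        """Combine independent risk votes into P(risky | votes).

        votes: list of booleans, True = annotator flagged the picture as risky.
        sensitivity: assumed P(vote True | actually risky)
        specificity: assumed P(vote False | actually safe)
        """
        p_risky, p_safe = prior, 1.0 - prior
        for v in votes:
            if v:
                p_risky *= sensitivity
                p_safe *= (1.0 - specificity)
            else:
                p_risky *= (1.0 - sensitivity)
                p_safe *= specificity
        return p_risky / (p_risky + p_safe)

    print(posterior_risk([True, True, False]))   # roughly 0.78 with these assumed rates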

    An Evolutionary Approach to Adaptive Image Analysis for Retrieving and Long-term Monitoring Historical Land Use from Spatiotemporally Heterogeneous Map Sources

    Land use changes have become a major contributor to anthropogenic global change. The ongoing dispersion and concentration of the human species, unprecedented in their order of magnitude, have indisputably altered Earth’s surface and atmosphere. The effects are so salient and irreversible that a new geological epoch, following the interglacial Holocene, has been announced: the Anthropocene. While some scholars date its onset back to the Neolithic revolution, it is commonly placed in the late 18th century. The rapid development since the industrial revolution and its implications gave rise to an increasing awareness of extensive anthropogenic land change and led to an urgent need for sustainable strategies for land use and land management. By preserving landscape and settlement patterns at discrete points in time, archival geospatial data sources such as remote sensing imagery and, in particular, historical geotopographic maps can give evidence of the dynamic land use change during this crucial period. In this context, this thesis set out to explore the potential of retrospective geoinformation for monitoring, communicating, modeling and eventually understanding the complex and gradually evolving processes of land cover and land use change. Currently, large numbers of geospatial data sources such as archival maps are being made accessible online worldwide by libraries and national mapping agencies. Despite their abundance and relevance, the use of historical land use and land cover information in research is still often hindered by laborious visual interpretation, which limits the temporal and spatial coverage of studies. Thus, the core of the thesis is dedicated to the computational acquisition of geoinformation from archival map sources by means of digital image analysis. Based on a comprehensive review of the literature as well as the data and proposed algorithms, two major challenges for long-term retrospective information acquisition and change detection were identified: first, the diversity of geographical entity representations over space and time, and second, the uncertainty inherent in both the data source itself and its utilization for land change detection. To address the former challenge, image segmentation is treated as a global non-linear optimization problem: the segmentation methods and parameters are adjusted using a metaheuristic, evolutionary approach. For preserving adaptability in high-level image analysis, a hybrid model- and data-driven strategy, combining a knowledge-based and a neural net classifier, is recommended. To address the second challenge, a probabilistic object- and field-based change detection approach is developed that models the positional, thematic, and temporal uncertainty inherent in both the data and the processing. Experimental results indicate the suitability of the methodology in support of land change monitoring. In conclusion, potential applications and directions for further research are outlined.
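
    As an illustrative aside (not the thesis's actual algorithm), the core idea of treating segmentation as a global optimization problem can be sketched with a toy evolutionary search: candidate values of a single segmentation parameter, a grey-value threshold, are evolved against a small synthetic reference mask. All data, names, and settings below are hypothetical.

    import random

    # Hypothetical tiny "map patch" (grey values) and a reference mask of the
    # target class, standing in for a historical map and manually digitized truth.
    image = [[30, 32, 200, 210],
             [28, 35, 205, 220],
             [26, 180, 190, 215],
             [25, 27, 33, 36]]
    truth = [[0, 0, 1, 1],
             [0, 0, 1, 1],
             [0, 1, 1, 1],
             [0, 0, 0, 0]]

    def segment(threshold):
        """Toy segmenter with one tunable parameter: a grey-value threshold."""
        return [[1 if v >= threshold else 0 for v in row] for row in image]

    def fitness(threshold):
        """Pixel agreement with the reference mask (higher is better)."""
        seg = segment(threshold)
        return sum(s == t for srow, trow in zip(seg, truth) for s, t in zip(srow, trow))

    def evolve(generations=30, pop_size=8, seed=1):
        rng = random.Random(seed)
        population = [rng.uniform(0, 255) for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]
            # Offspring are mutated copies of the fitter half (simple evolution strategy).
            population = parents + [max(0.0, min(255.0, p + rng.gauss(0, 20))) for p in parents]
        return max(population, key=fitness)

    best = evolve()
    print(best, fitness(best))

    In the thesis setting the search space would cover full segmentation methods and multiple parameters rather than a single threshold, but the evaluate-select-mutate loop is the same.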

    A multi-agent approach for design consistency checking

    The last decade has seen an explosion of interest in advanced product development methods, such as Computer Integrated Manufacture, the Extended Enterprise and Concurrent Engineering. As a result of globalization and the distribution of design and manufacturing facilities, cooperation amongst partners is becoming more challenging, because the design process tends to be sequential and requires communication networks for planning design activities and/or a great deal of travel to and from designers' workplaces. In a virtual environment, teams of designers work together and use the Internet/Intranet for communication. Design is a multi-disciplinary task that involves several stages, including input data analysis, conceptual design, basic structural design, detail design, production design, manufacturing process analysis, and documentation. As a result, the virtual team is typically very changeable in terms of designers' participation, and the environment itself changes over time. This leads to a potential increase in the number of design mismatches. A methodology of Intelligent Distributed Mismatch Control (IDMC) is proposed to alleviate some of the related difficulties. This thesis looks at Intelligent Distributed Mismatch Control in the context of the European Aerospace Industry and suggests a methodology for a conceptual framework based on a multi-agent architecture. This multi-agent architecture is the kernel of an Intelligent Distributed Mismatch Control System (IDMCS) that aims at ensuring that the overall design is consistent and acceptable to all participating partners. The methodology of Intelligent Distributed Mismatch Control is introduced and successfully implemented to detect design mismatches in complex design environments. A description of the research models and methods for intelligent mismatch control, a taxonomy of design mismatches, and an investigation into potential applications, such as aerospace design, are presented. A multi-agent framework for mismatch control is developed and described, and, based on the methodology used for the IDMC application, a formal framework for a multi-agent system is developed. The methods and principles are trialled on an aerospace distributed design application, namely the design of an A340 wing box. An ontology of knowledge for the agent-based Intelligent Distributed Mismatch Control System is introduced, as well as the distributed collaborative environment for consortium-based projects.
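
    As a purely illustrative sketch, and not the thesis's IDMCS implementation, the essence of design mismatch detection can be shown as a check of shared parameters across partners' agents; the parameter names, values, and tolerance are hypothetical.

    # Minimal illustrative mismatch check between design agents (hypothetical data).

    class DesignAgent:
        """Represents one partner's view of a set of shared design parameters."""
        def __init__(self, name, parameters):
            self.name = name
            self.parameters = parameters      # e.g. {"wing_box_span_mm": 28500}

    def find_mismatches(agents, tolerance=1e-6):
        """Report shared parameters on which partners' values disagree."""
        mismatches = []
        seen = {}                             # parameter -> (first agent, value)
        for agent in agents:
            for key, value in agent.parameters.items():
                if key in seen:
                    other_agent, other_value = seen[key]
                    if abs(value - other_value) > tolerance:
                        mismatches.append((key, other_agent, other_value, agent.name, value))
                else:
                    seen[key] = (agent.name, value)
        return mismatches

    structures = DesignAgent("structures", {"wing_box_span_mm": 28500, "rib_count": 27})
    aerodynamics = DesignAgent("aerodynamics", {"wing_box_span_mm": 28440})
    for mismatch in find_mismatches([structures, aerodynamics]):
        print("mismatch:", mismatch)

    In a multi-agent setting each partner's agent would hold its own parameters locally and a coordinating agent would run this kind of comparison over the shared subset, which is the role the IDMCS architecture assigns to its mismatch control components.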

    A Hybrid multi-agent architecture and heuristics generation for solving meeting scheduling problem

    Agent-based computing has attracted much attention as a promising technique for application domains that are distributed, complex and heterogeneous. Current research on multi-agent systems (MAS) has become mature enough to be applied as a technology for solving problems in an increasingly wide range of complex applications. The main formal architectures used to describe the relationships between agents in MAS are centralised and distributed architectures. In computational complexity theory, researchers have classified problems into the following categories: (i) P problems, (ii) NP problems, (iii) NP-complete problems, and (iv) NP-hard problems. A method for computing solutions to NP-hard problems in a reasonable time frame, using the algorithms and computational power available today, remains undiscovered, and unfortunately many practical problems belong to this very class. Nevertheless, it is essential that these problems are solved, and the only practical way of doing so is to use approximation techniques; heuristic solution techniques are one such alternative. A heuristic is a strategy that is powerful in general but not absolutely guaranteed to provide the best (i.e. optimal) solution, or even to find a solution at all. This demands adopting optimisation techniques such as Evolutionary Algorithms (EA). This research was undertaken to investigate the feasibility of running computationally intensive algorithms on multi-agent architectures while preserving the ability of small agents to run on small devices, including mobile devices. To achieve this, the present work proposes a new Hybrid Multi-Agent Architecture (HMAA) that generates new heuristics for solving NP-hard problems. The architecture is hybrid because it is a "semi-distributed/semi-centralised" architecture: variables and constraints are distributed among small agents exactly as in distributed architectures, but when the small agents become stuck, centralised control takes over and the variables are transferred to a super agent that has a central view of the whole system, possesses much more computational power, and runs intensive algorithms to generate new heuristics for the small agents, which then find an optimal solution to the specified problem. This research delivers the following: (1) a Hybrid Multi-Agent Architecture (HMAA) that generates new heuristics for solving many NP-hard problems; (2) two implemented HMAA frameworks, for search and for optimisation; (3) a new SMA meeting scheduling heuristic; (4) a new SMA repair strategy for the scheduling process; (5) a Small Agent (SMA) responsible for meeting scheduling; (6) “Local Search Programming” (LSP), a new concept for evolutionary approaches; (7) two types of super agent (LGP_SUA and LSP_SUA) implemented in the HMAA, with two SUAs (local and global optima) implemented for each type; and (8) a prototype of HMAA that employs the proposed meeting scheduling heuristic with the repair strategy on SMAs and the four extensive algorithms on SUAs. The results reveal that this architecture is applicable to many different application domains because of its simplicity and efficiency, and its performance was better than that of many existing meeting scheduling architectures. HMAA can also be adapted to other types of evolutionary approaches.
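
    As an illustrative aside, and not the thesis's actual SMA heuristic, the flavour of a greedy meeting scheduling heuristic with a repair step can be sketched as follows: a first pass assigns each meeting to the earliest slot free for all attendees, and a simple repair pass bumps any unplaced meeting into the least-loaded slot. All names and data are hypothetical.

    # Toy meeting-scheduling heuristic with a simple repair step (hypothetical data).

    def schedule_meetings(meetings, slots_per_day=4):
        """Greedily assign each meeting to the earliest slot free for all attendees."""
        busy = {}                           # (person, slot) -> meeting id
        assignment, unplaced = {}, []
        for meeting_id, attendees in meetings.items():
            for slot in range(slots_per_day):
                if all((person, slot) not in busy for person in attendees):
                    assignment[meeting_id] = slot
                    for person in attendees:
                        busy[(person, slot)] = meeting_id
                    break
            else:
                unplaced.append(meeting_id)     # conflict: left for the repair pass
        return assignment, unplaced

    def repair(assignment, unplaced, slots_per_day=4):
        """Repair strategy: push each unplaced meeting into the least-loaded slot."""
        for meeting_id in unplaced:
            load = {s: sum(1 for a in assignment.values() if a == s)
                    for s in range(slots_per_day)}
            assignment[meeting_id] = min(load, key=load.get)   # a fuller repair would
                                                               # re-negotiate attendees
        return assignment

    meetings = {"m1": {"ann", "bob"}, "m2": {"bob", "cho"}, "m3": {"ann", "cho"}}
    assignment, unplaced = schedule_meetings(meetings)
    print(repair(assignment, unplaced))

    In HMAA terms, a small agent would run a cheap heuristic of this kind locally, and only when it becomes stuck would the problem be handed to a super agent with heavier search machinery.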