
    2D multi-objective placement algorithm for free-form components

    This article presents a generic method to solve the 2D multi-objective placement problem for free-form components. The proposed method is a relaxed placement technique combined with a hybrid algorithm based on a genetic algorithm and a separation algorithm. The genetic algorithm is used as a global optimizer and is in charge of efficiently exploring the search space. The separation algorithm is used to legalize solutions proposed by the global optimizer, so that placement constraints are satisfied. A test case illustrates the application of the proposed method. Extensions for solving the 3D problem are given at the end of the article. Comment: ASME 2009 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, San Diego, United States (2009).
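
    The abstract does not give implementation details, but the division of labour it describes (a genetic algorithm proposing layouts, a separation algorithm legalizing them) can be illustrated with a minimal, hypothetical sketch. Everything below is an assumption made for illustration only: components are reduced to circles, the separation step simply pushes overlapping pairs apart, and the single objective is the bounding-box area of the layout; free-form parts would require polygon overlap tests and a real separation algorithm.

```python
# Minimal, hypothetical sketch of a GA + separation (legalization) scheme.
import math
import random

RADII = [4.0, 3.0, 2.5, 2.0, 1.5]   # assumed component sizes
AREA = 40.0                          # placement region is [0, AREA] x [0, AREA]

def random_layout():
    return [[random.uniform(0, AREA), random.uniform(0, AREA), r] for r in RADII]

def separate(layout, iters=50):
    """Legalization: push overlapping circles apart until no constraint is violated."""
    layout = [list(c) for c in layout]
    for _ in range(iters):
        moved = False
        for i in range(len(layout)):
            for j in range(i + 1, len(layout)):
                xi, yi, ri = layout[i]
                xj, yj, rj = layout[j]
                dx, dy = xj - xi, yj - yi
                d = math.hypot(dx, dy) or 1e-9
                overlap = ri + rj - d
                if overlap > 0:                       # components interpenetrate
                    push = overlap / (2 * d)
                    layout[i][0] -= dx * push; layout[i][1] -= dy * push
                    layout[j][0] += dx * push; layout[j][1] += dy * push
                    moved = True
        if not moved:
            break
    return layout

def fitness(layout):
    """Objective to minimise: area of the axis-aligned bounding box."""
    xs = [v for x, _, r in layout for v in (x - r, x + r)]
    ys = [v for _, y, r in layout for v in (y - r, y + r)]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def genetic_search(pop_size=20, generations=100):
    pop = [separate(random_layout()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # uniform crossover
            children.append(separate(child))                      # legalize offspring
        pop = survivors + children
    return min(pop, key=fitness)

print("best bounding-box area:", round(fitness(genetic_search()), 2))
```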

    Crowdsourcing solutions to 2D irregular strip packing problems from Internet workers

    Many industrial processes require the nesting of 2D profiles prior to the cutting, or stamping, of components from raw sheet material. Despite decades of sustained academic effort, algorithmic solutions are still sub-optimal and produce results that can frequently be improved by manual inspection. However, the Internet offers the prospect of novel ‘human-in-the-loop’ approaches to nesting problems that use online workers to produce packing efficiencies beyond the reach of current CAM packages. To investigate the feasibility of such an approach, this paper reports on the speed and efficiency of online workers engaged in the interactive nesting of six standard benchmark datasets. To ensure the results accurately characterise the diverse educational and social backgrounds of the many different labour forces available online, the study was conducted with subjects based in both Indian IT service centres (i.e., rural BPOs) and a network of homeworkers in northern Scotland. The results (i.e., time and packing efficiency) of the human workers are contrasted with both the baseline performance of a commercial CAM package and recent research results. The paper concludes that online workers could consistently achieve packing efficiencies roughly 4% higher than the commercial baseline established by the project. Beyond characterizing the abilities of online workers to nest components, the results also contribute to the development of algorithmic solutions by reporting new solutions to the benchmark problems and demonstrating methods for assessing the packing strategy employed by the best workers.
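
    The comparison above hinges on a packing-efficiency metric. The abstract does not define it, but it is commonly taken as the ratio of total part area to the area of the strip actually used; the short sketch below illustrates that reading, with all figures invented for the example.

```python
# Hypothetical illustration of a packing-efficiency comparison between a
# commercial baseline and an online worker (all numbers are invented).

def packing_efficiency(part_areas, strip_width, used_length):
    """Fraction of the used strip area covered by the nested parts."""
    return sum(part_areas) / (strip_width * used_length)

parts = [12.5, 8.0, 3.75, 6.2]        # areas of the nested profiles
baseline = packing_efficiency(parts, strip_width=10.0, used_length=4.0)
worker = packing_efficiency(parts, strip_width=10.0, used_length=3.85)

print(f"baseline efficiency: {baseline:.1%}")
print(f"online worker:       {worker:.1%} (+{worker - baseline:.1%} absolute)")
```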

    Dealing with Nonregular Shapes Packing

    This paper addresses the irregular strip packing problem, a particular two-dimensional cutting and packing problem in which convex/nonconvex shapes (polygons) have to be packed onto a single rectangular object. We propose an approach that integrates a metaheuristic engine (i.e., a genetic algorithm) with a placement rule (i.e., greedy bottom-left). Moreover, a shrinking algorithm is encapsulated into the metaheuristic engine to improve good-quality solutions. To accomplish this task, we propose a no-fit-polygon-based heuristic that shifts polygons closer to each other. Computational experiments performed on standard benchmark problems, as well as practical case studies developed within a large textile industry, are also reported and discussed in order to demonstrate the potential of the proposed approach.
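
    To make the placement rule concrete, the sketch below shows a greedy bottom-left heuristic of the kind the abstract pairs with a genetic algorithm. It is only an illustrative simplification: rectangles stand in for the irregular polygons (which would require no-fit-polygon computations), the candidate positions are limited to corner points, and the piece set, strip width and random search over orderings are all invented.

```python
# Hypothetical greedy bottom-left placement of rectangles on a strip of fixed width.
import random

def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def bottom_left(order, strip_width):
    """Place rectangles (w, h) in the given order, lowest position first, then leftmost."""
    placed = []
    for w, h in order:
        xs = {0.0} | {px + pw for px, _, pw, _ in placed}   # candidate x: left edge / right edges
        ys = {0.0} | {py + ph for _, py, _, ph in placed}   # candidate y: bottom / tops
        candidates = sorted((y, x) for y in ys for x in xs if x + w <= strip_width)
        for y, x in candidates:
            rect = (x, y, w, h)
            if not any(overlaps(rect, p) for p in placed):
                placed.append(rect)
                break
    used_length = max(y + h for _, y, _, h in placed)
    return placed, used_length

pieces = [(4, 2), (3, 3), (2, 5), (5, 1), (2, 2)]
# A genetic algorithm would evolve the placement order; here we just sample orders.
best = min((bottom_left(random.sample(pieces, len(pieces)), strip_width=6)
            for _ in range(200)), key=lambda result: result[1])
print("used strip length:", best[1])
```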

    Histopathological image analysis: a review

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole-slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has now become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state of the art in CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe.

    Advanced Information Systems and Technologies

    This book comprises the proceedings of the V International Scientific Conference "Advanced Information Systems and Technologies, AIST-2017". The papers cover issues related to system analysis and modeling, project management, information system engineering, intelligent data processing, computer networking and telecommunications. They will be useful for students, graduate students and researchers interested in computer science.

    Algorithms for Geometric Optimization and Enrichment in Industrialized Building Construction

    The burgeoning use of industrialized building construction, coupled with advances in digital technologies, is unlocking new opportunities to improve the status quo of construction projects being over-budget, delayed, and of undesirable quality. Yet several objective barriers still need to be overcome in order to realize the full potential of these innovations. Analysis of the literature and of examples from industry reveals the following notable barriers: (1) geometric optimization methods need to be developed for the stricter dimensional requirements in industrialized construction, (2) methods are needed to preserve model semantics during the process of generating an updated as-built model, (3) semantic enrichment methods are required for the end-of-life stage of industrialized buildings, and (4) there is a need to develop pragmatic approaches for algorithms to ensure they achieve the required computational efficiency. The common thread across these examples is the need to develop algorithms that optimize and enrich geometric models. To date, a comprehensive approach paired with pragmatic solutions remains elusive. This research fills this gap by presenting a new approach for algorithm development along with pragmatic implementations for the industrialized building construction sector. Computational algorithms are effective for driving the design, analysis, and optimization of geometric models. As such, this thesis develops new computational algorithms for the design, fabrication and assembly, onsite construction, and end-of-life stages of industrialized buildings. A common theme throughout this work is the development and comparison of varied algorithmic approaches (i.e., exact vs. approximate solutions) to see which is optimal for a given process. This is implemented in the following ways. First, a probabilistic method is used to simulate the accumulation of dimensional tolerances in order to optimize geometric models during design. Second, a series of exact and approximate algorithms are used to optimize the topology of 2D panelized assemblies to minimize material use during fabrication and assembly. Third, a new approach to automatically update geometric models is developed whereby initial model semantics are preserved during the process of generating an as-built model. Finally, a series of algorithms are developed to semantically enrich geometric models to enable industrialized buildings to be disassembled and reused. The developments made in this research form a rational and pragmatic approach to addressing the existing challenges faced in industrialized building construction. Such developments are shown not only to be effective in improving the status quo in the industry (i.e., improving cost, reducing project duration, and improving quality), but also in facilitating continuous innovation in construction.
    By way of assessing the potential impact of this work, the proposed algorithms can reduce rework risk during fabrication and assembly (65% rework reduction in the case study for the new tolerance simulation algorithm), reduce waste during manufacturing (11% waste reduction in the case study for the new panel unfolding and nesting algorithms), improve the accuracy and automation of as-built model generation (model error reduction from 50.4 mm to 5.7 mm in the case study for the new parametric BIM updating algorithms), reduce lifecycle cost for adapting industrialized buildings (15% reduction in capital costs in the computational building configurator) and reduce lifecycle impacts for reusing structural systems from industrialized buildings (between 54% and 95% reduction in average lifecycle impacts for the approach illustrated in Appendix B). From a computational standpoint, the novelty of the algorithms developed in this research can be described as follows. Complex geometric processes can be codified solely on the innate properties of geometry; that is, by parameterizing geometry and using methods such as combinatorial optimization, topology can be optimized and semantics can be automatically enriched for building assemblies. Functional discretization (whereby continuous variable domains are converted into discrete variable domains) is shown to be highly effective for complex geometric optimization approaches. Finally, the algorithms encapsulate and balance the benefits of both parametric and non-parametric schemas, resulting in the ability to achieve both high representational accuracy and semantically rich information (which has previously not been achieved or demonstrated). In summary, this thesis makes several key improvements to industrialized building construction. One of the key findings is that rather than pre-emptively determining the best-suited algorithm for a given process or problem, it is often more pragmatic to derive both an exact and an approximate solution and then decide which is optimal for a given process. Generally, most tasks related to optimizing or enriching geometric models are best solved using approximate methods. To this end, this research presents a series of key techniques that can be followed to improve the temporal performance of algorithms. The new approach for developing computational algorithms and the pragmatic demonstrations of geometric optimization and enrichment are expected to move the industry forward and solve many of the current barriers it faces.
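
    The abstract mentions a probabilistic simulation of accumulated dimensional tolerances but gives no model details. The sketch below illustrates the general idea of a Monte Carlo tolerance stack-up under assumptions invented for the example (panel widths, per-panel standard deviation, and the rework limit); it is not the thesis's algorithm.

```python
# Hypothetical Monte Carlo tolerance stack-up for a row of panels in a
# panelized assembly; dimensions, tolerances and the rework limit are invented.
import random

NOMINAL_WIDTHS = [1200.0, 1200.0, 900.0, 600.0]   # mm, panels placed in a row
SIGMA = 1.5                                        # mm, std dev of each panel width
LIMIT = 6.0                                        # mm, allowable deviation of the run

def simulate_run():
    """Total length of the assembled run with normally distributed panel errors."""
    return sum(random.gauss(w, SIGMA) for w in NOMINAL_WIDTHS)

def rework_probability(trials=100_000):
    """Fraction of simulated assemblies whose total deviation exceeds the limit."""
    nominal = sum(NOMINAL_WIDTHS)
    failures = sum(abs(simulate_run() - nominal) > LIMIT for _ in range(trials))
    return failures / trials

print(f"estimated rework probability: {rework_probability():.2%}")
```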

    VI Workshop on Computational Data Analysis and Numerical Methods: Book of Abstracts

    The VI Workshop on Computational Data Analysis and Numerical Methods (WCDANM) will be held on June 27-29, 2019, in the Department of Mathematics of the University of Beira Interior (UBI), Covilhã, Portugal. It is a unique opportunity to disseminate scientific research related to the areas of Mathematics in general, with particular relevance to Computational Data Analysis and Numerical Methods in theoretical and/or practical fields, using new techniques and giving special emphasis to applications in Medicine, Biology, Biotechnology, Engineering, Industry, Environmental Sciences, Finance, Insurance, Management and Administration. The meeting will provide a forum for discussion and debate of ideas of interest to the scientific community in general. New scientific collaborations among colleagues, namely in Masters and PhD projects, are expected from this meeting. The event is open to the entire scientific community (with or without a communication/poster).

    On the enhancement of Big Data Pipelines through Data Preparation, Data Quality, and the distribution of Optimisation Problems

    Nowadays, data are fundamental for companies, providing operational support by facilitating daily transactions. Data have also become the cornerstone of strategic decision-making processes in businesses. For this purpose, there are numerous techniques that allow knowledge and value to be extracted from data. For example, optimisation algorithms excel at supporting decision-making processes to improve the use of resources, time and costs in the organisation. In the current industrial context, organisations usually rely on business processes to orchestrate their daily activities while collecting large amounts of information from heterogeneous sources. Therefore, the support of Big Data technologies (which are based on distributed environments) is required given the volume, variety and velocity of data. Then, in order to extract value from the data, a set of techniques or activities is applied in an orderly way and at different stages. This set of techniques or activities, which facilitates the acquisition, preparation, and analysis of data, is known in the literature as a Big Data pipeline. In this thesis, the improvement of three stages of Big Data pipelines is tackled: Data Preparation, Data Quality assessment, and Data Analysis. These improvements can be addressed from an individual perspective, by focussing on each stage, or from a more complex and global perspective, which implies coordinating these stages to create data workflows. The first stage to improve is Data Preparation, by supporting the preparation of data with complex structures (i.e., data with various levels of nested structures, such as arrays). Shortcomings have been found in the literature and in current technologies for transforming complex data in a simple way. Therefore, this thesis aims to improve the Data Preparation stage through Domain-Specific Languages (DSLs). Specifically, two DSLs are proposed for different use cases. While one of them is a general-purpose data transformation language, the other is a DSL aimed at extracting event logs in a standard format for process mining algorithms. The second area for improvement is related to the assessment of Data Quality. Depending on the type of Data Analysis algorithm, poor-quality data can seriously skew the results. Optimisation algorithms are a clear example: if the data are not sufficiently accurate and complete, the search space can be severely affected. Therefore, this thesis formulates a methodology for modelling Data Quality rules adjusted to the context of use, as well as a tool that facilitates the automation of their assessment. This makes it possible to discard data that do not meet the quality criteria defined by the organisation. In addition, the proposal includes a framework that helps to select actions to improve the usability of the data. The third and last proposal involves the Data Analysis stage. In this case, this thesis faces the challenge of supporting the use of optimisation problems in Big Data pipelines. There is a lack of methodological solutions that allow exhaustive optimisation problems (i.e., those that guarantee finding an optimal solution by exploring the whole search space) to be computed in distributed environments. The resolution of this type of problem in the Big Data context is computationally complex and can be NP-complete. This is caused by two different factors. On the one hand, the search space can increase significantly as the amount of data to be processed by the optimisation algorithms increases.
    This challenge is addressed through a technique to generate and group problems with distributed data. On the other hand, processing optimisation problems with complex models and large search spaces in distributed environments is not trivial. Therefore, a proposal is presented for a particular case in this type of scenario. As a result, this thesis develops methodologies that have been published in scientific journals and conferences. The methodologies have been implemented in software tools that are integrated with the Apache Spark data processing engine. The solutions have been validated through tests and use cases with real datasets.
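
    The thesis itself is not reproduced here, but the general idea of generating many small, exhaustively solvable optimisation problems from grouped data and distributing them over Spark can be sketched as follows. The data, the grouping key, and the toy ordering objective are all invented for the example; only the Spark API calls are real.

```python
# Hypothetical sketch: distribute many small exhaustive optimisation problems
# over Apache Spark, one problem per data group.
from itertools import permutations
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("distributed-exhaustive-opt").getOrCreate()
sc = spark.sparkContext

# Each record is (group_id, item_weight); each group becomes one small,
# exhaustively solvable ordering problem.
records = [("A", 3), ("A", 1), ("A", 4), ("B", 2), ("B", 5), ("B", 1), ("B", 3)]

def best_ordering(items):
    """Exhaustive search: minimise the sum of prefix sums (a toy objective)."""
    def cost(order):
        total, prefix = 0, 0
        for w in order:
            prefix += w
            total += prefix
        return total
    return min(permutations(items), key=cost)

solutions = (sc.parallelize(records)
               .groupByKey()                          # one optimisation problem per group
               .mapValues(lambda ws: best_ordering(list(ws)))
               .collect())

for group, order in solutions:
    print(group, order)

spark.stop()
```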

    Corporate Smart Content Evaluation

    Nowadays, a wide range of information sources is available due to the evolution of the web and the collection of data. Much of this information is consumable and usable by humans but not understandable and processable by machines. Some data may be directly accessible in web pages or via data feeds, but most of the meaningful existing data is hidden within deep-web databases and enterprise information systems. Besides the inability to access a wide range of data, manual processing by humans is effortful, error-prone and no longer contemporary. Semantic web technologies deliver capabilities for machine-readable, exchangeable content and metadata for automatic processing of content. The enrichment of heterogeneous data with background knowledge described in ontologies promotes reusability and supports automatic processing of data. The establishment of "Corporate Smart Content" (CSC) - semantically enriched data with high information content and sufficient benefits in economic areas - is the main focus of this study. We describe three current research areas in the field of CSC concerning scenarios and datasets applicable to corporate applications, algorithms and research. Aspect-oriented Ontology Development advances modular ontology development and partial reuse of existing ontological knowledge. Complex Entity Recognition enhances traditional entity recognition techniques to recognize clusters of related textual information about entities. Semantic Pattern Mining combines semantic web technologies with pattern learning to mine for complex models by attaching background knowledge. This study introduces the aforementioned topics by analyzing applicable scenarios with an economic and industrial focus, as well as a research emphasis. Furthermore, a collection of existing datasets for the given areas of interest is presented and evaluated. The target audience includes researchers and developers of CSC technologies - people interested in semantic web features, ontology development, automation, and extracting and mining valuable information in corporate environments. The aim of this study is to provide a comprehensive and broad overview of the three topics, assist decision-making in relevant scenarios, and help in choosing practical datasets for evaluating custom problem statements. Detailed descriptions of the attributes and metadata of the datasets should serve as a starting point for individual ideas and approaches.
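
    As a small illustration of the kind of semantic enrichment discussed above, the sketch below attaches background knowledge from a tiny ontology to a plain enterprise record using rdflib, so that a machine can query it. The vocabulary, URIs, and data values are invented for the example and are not taken from the study.

```python
# Hypothetical semantic enrichment of a plain record with ontology background knowledge.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/corporate/")   # invented vocabulary

g = Graph()
g.bind("ex", EX)

# Background knowledge (a tiny "ontology"): suppliers are organisations.
g.add((EX.Supplier, RDFS.subClassOf, EX.Organisation))

# A raw record, e.g. pulled from an enterprise system, described as RDF.
acme = EX.supplier_042
g.add((acme, RDF.type, EX.Supplier))
g.add((acme, RDFS.label, Literal("ACME Metal Works")))
g.add((acme, EX.locatedIn, Literal("Bremen")))

# Machines can now query the enriched data, e.g. find all organisations,
# even though the record was only ever typed as a Supplier.
query = """
SELECT ?org ?label WHERE {
    ?org rdf:type/rdfs:subClassOf* ex:Organisation ;
         rdfs:label ?label .
}
"""
for org, label in g.query(query, initNs={"ex": EX, "rdf": RDF, "rdfs": RDFS}):
    print(org, label)

g.serialize(destination="enriched.ttl", format="turtle")
```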