1,103 research outputs found
Artificial Intelligence for Automated Design of Elevator Systems
Configuration and design of complex products represent a challenge
in many application fields. The designer must take into account many
different aspects and make decisions, typically driven by experience,
while respecting performance constraints and costs. Methods and tools
for design automation represent a viable solution to such complex decision
problems, and also make it possible to optimize the performance of the final
product on particular context-driven aspects. Artificial intelligence (AI)
algorithms can help in dealing with this complexity and enhance current
tools by supplying solutions in feasible time.
My research is concerned with the development and testing of different
artificial intelligence (AI) techniques to automate the design of elevators.
Elevator design is a problem with many interesting aspects like the need to
deal with a hybrid search state space (continuous and discrete variables)
constrained by design requirements and safety regulations. The study,
design and integration of AI techniques in this particular application field
can provide the end user with design automation tools that output feasible
solutions within acceptable computation times.
My research considered AI techniques such as special-purpose heuristic
search, genetic algorithms and constraint satisfaction to solve elevator
configuration problems. I tested them considering different setups and
parts of the whole design process. I have also implemented a tool, LiftCreate,
available as a web application. LiftCreate leverages the findings of
my research to automate the design of elevators and, to the best of my
knowledge, there is currently no similar tool publicly available from either
academia or industry that provides the same level of design automation.
Automated Design of Elevator Systems: Experimenting with Constraint-Based Approaches
System configuration and design is a well-established topic
in AI. While many successful applications exist, there are still areas of
manufacturing where AI techniques find little or no application. We focus
on one such area, namely building and installation of elevator systems,
for which we are developing an automated design and configuration tool.
The questions that we address in this paper are: (i) What are the best
ways to encode some subtasks of elevator design into constraint-based
representations? (ii) What are the best tools available to solve the encodings? We contribute an empirical analysis to address these questions
in our domain of interest, as well as the complete set of benchmarks to
foster further research.
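One such encoding can be sketched as a small constraint program. The sketch below is a minimal illustration, not the paper's actual encoding: the shaft dimensions, discretization steps, clearance rules, and capacity threshold are all invented for the example, and the hybrid continuous/discrete space is handled by simple discretization.

```python
# Minimal sketch of a constraint-based encoding for one elevator design
# subtask. All numeric parameters below are assumptions for illustration;
# real constraints come from design requirements and safety regulations.

from itertools import product

SHAFT_W, SHAFT_D = 1600, 1800          # shaft width/depth in mm (assumed)
CAR_WIDTHS = range(900, 1500, 50)      # discretized continuous variables
CAR_DEPTHS = range(1000, 1700, 50)
DOOR_TYPES = {"telescopic": 120, "central": 200}  # door-gear width (assumed)

def feasible(w, d, door):
    margin = DOOR_TYPES[door]
    return (w + margin <= SHAFT_W and   # car plus door gear fits shaft width
            d + 100 <= SHAFT_D and      # rear clearance of 100 mm (assumed)
            w * d >= 1_100_000)         # minimum floor area for capacity

def best_design():
    # exhaustive search over the discretized space, maximizing car area
    candidates = [(w, d, door)
                  for w, d, door in product(CAR_WIDTHS, CAR_DEPTHS, DOOR_TYPES)
                  if feasible(w, d, door)]
    return max(candidates, key=lambda c: c[0] * c[1], default=None)

print(best_design())
```

A real encoding would hand these constraints to a dedicated solver rather than enumerate; the point here is only the shape of the variables and constraints.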
Experimenting with Constraint Programming Techniques in Artificial Intelligence: Automated System Design and Verification of Neural Networks
This thesis focuses on the application of Constraint Satisfaction and Optimization techniques
in two Artificial Intelligence (AI) domains: automated design of elevator systems and
verification of Neural Networks (NNs). The three main areas of interest for my work
are (i) the languages for defining the constraints for the systems, (ii) the algorithms and
encodings that enable solving the problems considered and (iii) the tools that implement
such algorithms.
Given the expressivity of the domain description languages and the availability of effective
tools, several problems in diverse application fields have been solved successfully using
constraint satisfaction techniques. The two case studies herewith presented are no exception,
even if they entail different challenges in the adoption of such techniques. Automated design
of elevator systems not only requires encoding of feasibility (hard) constraints, but should
also take into account design preferences, which can be expressed in terms of cost functions
whose optimal or near-optimal value characterizes "good" design choices versus "poor" ones.
Verification of NNs (and other machine-learned artifacts) requires solving large-scale
constraint problems which may become the main bottlenecks in the overall verification
procedure.
This thesis proposes some ideas for tackling such challenges, including encoding techniques
for automated design problems and new algorithms for handling the optimization
problems arising from verification of NNs. The proposed algorithms and techniques are evaluated
experimentally by developing tools that are made available to the research community
for further evaluation and improvement.
Essays On Perioperative Services Problems In Healthcare
One of the critical challenges in healthcare operations management is to efficiently utilize the expensive resources needed while maintaining the quality of care provided. Simulation and optimization methods can be effectively used to provide better healthcare services. This can be achieved by developing models to minimize patient waiting times, minimize healthcare supply chain and logistics costs, and maximize access. In this dissertation, we study some of the important problems in healthcare operations management. More specifically, we focus on perioperative services and study scheduling of operating rooms (ORs) and management of necessary resources such as staff, equipment, and surgical instruments. We develop optimization and simulation methods to coordinate material handling decisions, inventory management, and OR scheduling.
In Chapter 1 of this dissertation, we investigate material handling services to improve the flow of surgical materials in hospitals. The ORs require timely supply of surgical materials such as surgical instruments, linen, and other additional equipment required to perform the surgeries. The availability of surgical instruments at the right location is crucial to both patient safety and cost reduction in hospitals. Similarly, soiled material must also be disposed of appropriately and quickly. Hospitals use automated material handling systems to perform these daily tasks, minimize workforce requirements, reduce the risk of contamination, and reduce workplace injuries. Most of the literature related to automated guided vehicle (AGV) systems focuses on improving their performance in manufacturing settings. In the last 20 years, several articles have addressed issues relevant to healthcare systems. This literature mainly focuses on improving the design and management of AGV systems to handle the specific challenges faced in hospitals, such as interactions with patients, staff, and elevators, and adherence to safety and hygiene standards.
In Chapter 1, we focus on optimizing the delivery of surgical instrument case carts from material departments to ORs through automated guided vehicles (AGV). We propose a framework that integrates data analysis with system simulation and optimization. We test the performance of the proposed framework through a case study developed using data from a partnering hospital, Greenville Memorial Hospital (GMH) in South Carolina. Through an extensive set of simulation experiments, we investigate whether performance measures, such as travel time and task completion time, improve after a redesign of AGV pathways. We also study the impact of fleet size on these performance measures and use simulation-optimization to evaluate the performance of the system for different fleet sizes. A pilot study was conducted at GMH to validate the results of our analysis. We further evaluated different policies for scheduling the material handling activities to assess their impact on delays and the level of inventory required. Reducing the inventory level of an instrument may negatively impact the flexibility in scheduling surgeries, cause delays, and therefore, reduce the service level provided. On the other hand, increasing inventory levels may not necessarily eliminate the delays since some delays occur because of inefficiencies in the material handling processes. Hospitals tend to maintain large inventories to ensure that the required instruments are available for scheduled surgery. Typically, the inventory level of surgical instruments is determined by the total number of surgeries scheduled in a day, the daily schedule of surgeries that use the same instrument, the processing capacity of the central sterile storage division (CSSD), and the schedule of material handling activities. Using simulation-optimization tools, we demonstrate that integrating decisions of material handling activities with inventory management has the potential to reduce the cost of the system.
In Chapter 2 we focus on coordinating OR scheduling decisions with efficient management of surgical instruments. Hospitals pay close attention to OR scheduling because a large portion of hospitals' income comes from surgical procedures. Inventory management decisions follow the OR schedules. Previous work points to the cost savings and benefits of optimizing the OR scheduling process. However, based on our review of the literature, only a few articles discuss the inclusion of instrument inventory-related decisions in OR schedules. Surgical instruments are classified as (1) owned by the hospital and (2) borrowed from other hospitals or vendors. Borrowed instruments incur rental costs that can be up to 12-25% of the listed price of the surgical instrument. A daily schedule of ORs determines how many rental instruments would be required to perform all surgeries in a timely manner. A simple strategy used in most hospitals is to first schedule the ORs, followed by determining the instrument assignments. However, such a strategy may result in low utilization of surgical instruments owned by hospitals. Furthermore, creating an OR schedule that efficiently uses available surgical instruments is a challenging problem. The problem becomes even more challenging in the presence of material handling delays, stochastic demand, and uncertain surgery durations. In this study, we propose an alternative scheduling strategy in which the OR scheduling and inventory management decisions are coordinated. More specifically, we propose a mixed-integer programming model that integrates instrument assignment decisions with OR scheduling to minimize costs. This model determines how many ORs to open, determines the schedule of ORs, and also identifies the instrument assignments for each surgery. If the level of instrument inventory cannot meet the surgical requirements, our model allows instruments to be rented at a higher cost.
We introduce and evaluate the solution methods for this problem. We propose a Lagrangean decomposition-based heuristic, which is an iterative procedure. This heuristic separates the scheduling problem from the inventory assignment problem. These subproblems are computationally easier to solve and provide a lower bound on the optimal cost of the integrated OR scheduling problem. The solution of the scheduling subproblem is used to generate feasible solutions in every iteration. We propose two alternatives to find feasible solutions to our problem. These alternatives provide an upper bound on the cost of the integrated scheduling problem. We conducted a thorough sensitivity analysis to evaluate the impact of different parameters, such as the length of the scheduling horizon, the number of ORs that can be used in parallel, the number of surgeries, and various cost parameters on the running time and quality of the solution. Using a case study developed at GMH, we demonstrate that integrating OR scheduling decisions with inventory management has the potential to reduce the cost of the system.
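The bounding scheme described above can be illustrated with a toy Lagrangean decomposition. Everything numeric here is invented: a single coupling constraint x + y >= DEMAND links an "owned instruments" variable x and a "rented instruments" variable y; dualizing it yields independent subproblems whose combined value is a lower bound, and a repair step produces feasible solutions (upper bounds). This mirrors the structure of the heuristic, not its actual model.

```python
# Toy Lagrangean decomposition with a subgradient update on the multiplier.
# Costs, capacity, and demand are made-up numbers for illustration only.

OWN_COST, RENT_COST, DEMAND, CAP = 2.0, 5.0, 6, 5

def solve_subproblems(lam):
    # with the coupling constraint dualized, each subproblem is independent
    x = min(range(CAP + 1), key=lambda v: (OWN_COST - lam) * v)
    y = min(range(CAP + 1), key=lambda v: (RENT_COST - lam) * v)
    lb = (OWN_COST - lam) * x + (RENT_COST - lam) * y + lam * DEMAND
    return x, y, lb

def repair(x, y):
    # make the relaxed solution feasible: rent whatever is still missing
    y = max(y, min(CAP, DEMAND - x))
    return OWN_COST * x + RENT_COST * y

lam, best_lb, best_ub = 0.0, float("-inf"), float("inf")
for it in range(50):
    x, y, lb = solve_subproblems(lam)
    best_lb = max(best_lb, lb)               # dual value: lower bound
    best_ub = min(best_ub, repair(x, y))     # feasible solution: upper bound
    lam = max(0.0, lam + 0.5 * (DEMAND - x - y))  # subgradient step

print(best_lb, best_ub)   # the bounds bracket the optimal cost
```

On this tiny instance the gap closes completely; in general the two bounds only bracket the optimum, as in the dissertation's procedure.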
The objective of Chapter 3 is to develop quick and efficient algorithms to solve the integrated OR scheduling and inventory management problem, and to generate optimal or near-optimal solutions that increase the efficiency of GMH operations. In Chapter 2, we introduced the integrated OR scheduling problem, which is a combinatorial optimization problem. As such, the problem is challenging to solve. We faced these challenges when trying to solve the problem directly using the Gurobi solver. The solutions obtained via construction heuristics were far from optimality, while the Lagrangean decomposition-based heuristics take several hours to find good solutions for large-sized problems. In addition, those methods are iterative procedures and computationally expensive. These challenges have motivated the development of metaheuristics to solve OR scheduling problems, which have been shown to be very effective in solving other combinatorial problems in general and scheduling problems in particular. In Chapter 3, we adopt a metaheuristic, Tabu search, which is a versatile heuristic used to solve many different types of scheduling problems. We propose an improved construction heuristic to generate an initial solution. This heuristic identifies the number of ORs to be used and then the assignment of surgeries to ORs. In the second step, it identifies instrument-surgery assignments on a first-come, first-served basis. The proposed Tabu search method improves upon this initial solution. To explore different areas of the feasible region, we propose three neighborhoods that are searched one after the other. For each neighborhood, we create a preferred attribute candidate list, which contains solutions that have attributes of good solutions. The solutions on this list are evaluated first, before examining other solutions in the neighborhood. The solutions obtained with Tabu search are compared with the lower and upper bounds obtained in Chapter 2.
Using a case study developed at GMH, we demonstrate that high-quality solutions can be obtained using very little computational time.
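The Tabu search ingredients above (a construction step, a move neighborhood, a tabu list with aspiration) can be sketched on a toy OR assignment problem. The durations, the single move neighborhood, and the makespan objective are illustrative stand-ins for the dissertation's actual cost model and three neighborhoods.

```python
# Minimal Tabu search sketch: assign surgeries to ORs to balance workload.
# Durations and objective are assumptions, not the dissertation's data.

import random

DUR = [90, 60, 120, 45, 75, 100]       # surgery durations in minutes (assumed)
N_OR = 2

def cost(assign):
    # makespan: the load of the busiest OR
    loads = [0] * N_OR
    for s, r in enumerate(assign):
        loads[r] += DUR[s]
    return max(loads)

def tabu_search(iters=100, tenure=5, seed=0):
    rng = random.Random(seed)
    cur = [rng.randrange(N_OR) for _ in DUR]      # construction: random start
    best_cost = cost(cur)
    tabu = {}                                     # (surgery, target OR) -> expiry
    for it in range(iters):
        moves = [(s, r) for s in range(len(DUR)) for r in range(N_OR)
                 if r != cur[s]]

        def score(m):
            s, r = m
            cand = cur[:]
            cand[s] = r
            return cost(cand)

        moves.sort(key=score)
        for s, r in moves:
            # take the best move that is not tabu; aspiration: a tabu move
            # is allowed if it improves on the best solution seen so far
            if tabu.get((s, r), -1) < it or score((s, r)) < best_cost:
                tabu[(s, cur[s])] = it + tenure   # forbid moving straight back
                cur[s] = r
                break
        best_cost = min(best_cost, cost(cur))
    return best_cost

print(tabu_search())
```

The tabu list lets the search climb out of local optima that a pure descent would get stuck in, which is exactly why it suits schedule neighborhoods.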
Design as interactions of problem framing and problem solving: a formal and empirical basis for problem framing in design
In this thesis, I present, illustrate and empirically validate a novel approach to modelling and explaining the design process. The main outcome of this work is the formal definition of problem framing, and the formulation of a recursive model of framing in design. The model (code-named RFD) represents a formalisation of a grey area in the science of design, and sees the design process as a recursive interaction of problem framing and problem solving.
The proposed approach is based upon a phenomenon introduced in cognitive science and known as (reflective) solution talkback. Previously, there were no formalisations of the knowledge interactions occurring within this complex reasoning operation. The recursive model is thus an attempt to express the existing knowledge in a formal and structured manner. Despite the rather abstract knowledge level on which the model is defined, it is a firm step towards the clarification of the design process. The RFD model is applied to the knowledge-level description of the conducted experimental study, which is annotated and analysed in the defined terminology. Eventually, several schemas implied by the model are identified, exemplified, and elaborated to reflect the empirical results.
The model features the mutual interaction of predicates "specifies" and "satisfies". The first asserts that a certain set of explicit statements is sufficient for expressing relevant desired states the design is aiming to achieve. The validity of predicate "specifies" might not be provable directly in any problem solving theory. A particular specification can be upheld or rejected only by drawing upon the validity of a complementary predicate "satisfies" and the (un-)acceptability of the considered candidate solution (e.g. technological artefact, product). It is the role of the predicate "satisfies" to find and derive such a candidate solution. The predicates "specifies" and "satisfies" are contextually bound and can be evaluated only within a particular conceptual frame. Thus, a solution to the design problem is sound and admissible with respect to an explicit commitment to a particular specification and design frame. The role of the predicate "acceptable" is to compare the admissible solutions and frames against the "real" design problem. As if it answered the question: "Is this solution really what I wanted/intended?"
Furthermore, I propose a set of principled schemas on the conceptual (knowledge) level with an aim to make the interactive patterns of the design process explicit. These conceptual schemas are elicited from the rigorous experiments that utilised the structured and principled approach to recording the designer's conceptual reasoning steps and decisions. They include the refinement of an explicit problem specification within a conceptual frame; the refinement of an explicit problem specification using a re-framed reference; and the conceptual re-framing (i.e. the identification and articulation of new conceptual terms).
Since the conceptual schemas reflect the sequence of the "typical" decisions the designer may make during the design process, there is no single, symbol-level method for the implementation of these conceptual patterns. Thus, when one decides to follow the abstract patterns and schemas, this abstract model alone can foster a principled design on the knowledge level. It must be acknowledged that, for the purpose of computer-based support, these abstract schemas need to be turned into operational models and consequently into suitable methods. However, such an operational perspective was beyond the time and resource constraints placed on this research.
Modular supervisory controller for complex systems
Automation for the oil and gas industry is driven by the need to improve efficiency, productivity, consistency, and personnel safety, while reducing cost. Fully automated systems alleviate the physical toll on human operators and allow them to focus on monitoring unsafe well events and machinery maintenance. Complex systems like drilling rigs and snubbing units require supervisory controllers that can safely coordinate equipment and processes, overcome interoperability challenges and allow for functional scalability without sacrificing safety, security, and consistency of operations. The primary objective of this report is to explore the feasibility of developing a modular supervisory controller architecture which addresses these concerns by modifying and extending existing architectures. Such modifications include the use of non-homogeneous models in sub-system modules, including discrete event models for control and physics-based models for collision avoidance, the addition of a system compilation module (Meta Module) to identify simple design errors, and the implementation of an algorithm for synthesis of modules and filters to replace missing sub-systems. This report discusses the implementation results of the modular supervisory control architecture (modMFSM) on a simplified two-machine drilling system for assessment of design practices. Simulations for three test cases were executed to assess the ability of the controller to correctly perform error-free operations, detect and react to possible collisions, and adapt to missing equipment. The report then discusses the possibilities of extending the modMFSM architecture to control large complex systems such as drilling rigs, using snubbing operations as an example.
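The idea of heterogeneous sub-system modules coordinated by a supervisor can be sketched as follows. The machine names, events, and collision rule below are invented for illustration and do not reproduce the modMFSM architecture.

```python
# Illustrative sketch of modular supervisory control: each sub-system is a
# small finite-state module, and the supervisor enables an event only if
# every module allows it. Names and rules are assumptions for the example.

class Module:
    def __init__(self, transitions, state):
        self.transitions = transitions    # (state, event) -> next state
        self.state = state

    def allows(self, event):
        # events outside a module's alphabet are never blocked by it
        alphabet = {e for (_, e) in self.transitions}
        return event not in alphabet or (self.state, event) in self.transitions

    def step(self, event):
        self.state = self.transitions.get((self.state, event), self.state)

class Supervisor:
    def __init__(self, modules):
        self.modules = modules

    def execute(self, event):
        if all(m.allows(event) for m in self.modules):
            for m in self.modules:
                m.step(event)
            return True
        return False                      # event disabled: would be unsafe

# two machines sharing a work zone: only one may occupy it at a time
arm = Module({("idle", "arm_enter"): "in_zone",
              ("in_zone", "arm_exit"): "idle"}, "idle")
hoist = Module({("idle", "hoist_enter"): "in_zone",
                ("in_zone", "hoist_exit"): "idle"}, "idle")

class ZoneGuard(Module):
    # collision-avoidance module: blocks a second entry into the shared zone
    def __init__(self, a, h):
        super().__init__({}, "free")
        self.a, self.h = a, h
    def allows(self, event):
        if event.endswith("_enter"):
            return self.a.state == "idle" and self.h.state == "idle"
        return True
    def step(self, event):
        pass

sup = Supervisor([arm, hoist, ZoneGuard(arm, hoist)])
print(sup.execute("arm_enter"))    # allowed: the zone is free
print(sup.execute("hoist_enter"))  # blocked while the arm occupies the zone
```

New equipment is added by appending another module; the conjunction in `execute` is what keeps the composition safe without rewriting the other modules.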
Robotized Warehouse Systems: Developments and Research Opportunities
Robotized handling systems are increasingly applied in distribution centers. They require little space, provide flexibility in managing varying demand requirements, and are able to work 24/7. This makes them particularly fit for e-commerce operations. This paper reviews new categories of robotized handling systems, such as shuttle-based storage and retrieval systems, shuttle-based compact storage systems, and robotic mobile fulfillment systems. For each system, we categorize the literature in three groups: system analysis, design optimization, and operations planning and control. Our focus is to identify the research issues and the OR modeling methodologies adopted to analyze each problem. We find that many new robotic systems and applications have hardly been studied in the academic literature, despite their increasing use in practice. Due to unique system features (such as autonomous control, networked and dynamic operation), new models and methods are needed to address the design and operational control challenges for such systems, in particular for the integration of subsystems. Integrated robotized warehouse systems will form the next category of warehouses. All vital warehouse design, planning and control logic, such as methods to design layout, storage and order picking system selection, storage slotting, order batching, picker routing, and picker-to-order assignment, will have to be revisited for new robotized warehouses.
Reusable components for knowledge modelling
In this work I illustrate an approach to the development of a library of problem solving components for knowledge modelling. This approach is based on an epistemological modelling framework, the Task/Method/Domain/Application (TMDA) model, and on a principled methodology, which provide an integrated view of both library construction and application development by reuse.
The starting point of the proposed approach is given by a task ontology. This formalizes a conceptual viewpoint over a class of problems, thus providing a task-specific framework, which can be used to drive the construction of a task model through a process of model-based knowledge acquisition. The definitions in the task ontology provide the initial elements of a task-specific library of problem solving components.
In order to move from problem specification to problem solving, a generic, i.e. task-independent, model of problem solving as search is introduced, and instantiated in terms of the concepts in the relevant task ontology, say T. The result is a task-specific, but method-independent, problem solving model. This generic problem solving model provides the foundation from which alternative problem solving methods for a class of tasks can be defined. Specifically, the generic problem solving model provides i) a highly generic method ontology, say M; ii) a set of generic building blocks (generic tasks), which can be used to construct task-specific problem solving methods; and iii) an initial problem solving method, which can be characterized as the most generic problem solving method, which subscribes to M and is applicable to T. More specific problem solving methods can then be (re-)constructed from the generic problem solving model through a process of method/ontology specialization and method-to-task application.
The resulting library of reusable components enjoys a clear theoretical basis and provides robust support for reuse. In the thesis I illustrate the approach in the area of parametric design.
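The separation between a task-independent search core and its task-specific instantiation can be sketched in a few lines. The parametric design task below (pick a width and height meeting area and perimeter limits) is a made-up example, not one from the thesis.

```python
# A task-independent search procedure, instantiated with task-specific
# operators; the parametric design task and its bounds are assumptions.

from collections import deque

def generic_search(initial, successors, is_solution):
    # generic core: breadth-first exploration of design states
    frontier, seen = deque([initial]), {initial}
    while frontier:
        state = frontier.popleft()
        if is_solution(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

# task-specific instantiation: grow (width, height) until the design
# reaches area >= 12 while keeping perimeter <= 16 (invented constraints)
def successors(state):
    w, h = state
    return [(w + 1, h), (w, h + 1)]

def is_solution(state):
    w, h = state
    return w * h >= 12 and 2 * (w + h) <= 16

print(generic_search((1, 1), successors, is_solution))
```

Swapping in different `successors` and `is_solution` functions re-targets the same core to another task, which is the reuse pattern the library aims to support.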
Uses and applications of artificial intelligence in manufacturing
The purpose of this thesis is to provide engineers and personnel with an overview of the concepts that underlie Artificial Intelligence and Expert Systems. Artificial Intelligence is concerned with the development of theories and techniques required to provide a computational engine with the ability to perceive, think, and act in an intelligent manner in a complex environment.
An expert system is a branch of Artificial Intelligence in which the methods of reasoning emulate those of human experts. Artificial Intelligence derives its power from its ability to represent complex forms of knowledge, some of it common sense, heuristic and symbolic, and from the ability to apply that knowledge in searching for solutions.
The thesis reviews: the components of an intelligent system, the basics of knowledge representation, search-based problem solving methods, expert system technologies, and the uses and applications of AI in various manufacturing areas such as Design, Process Planning, Production Management, Energy Management, Quality Assurance, Manufacturing Simulation, Robotics, and Machine Vision.
The prime objectives of the thesis are to explain the basic concepts underlying Artificial Intelligence and to identify where the technology may be applied in the field of Manufacturing Engineering.
Design and Implementation of High QoS 3D-NoC using Modified Double Particle Swarm Optimization on FPGA
One technique to overcome the exponential growth bottleneck is to increase the number of cores on a processor, although having too many cores might cause issues including chip overheating and communication blockage. The on-chip communication bottleneck is presently effectively resolved by networks-on-chip (NoC). A 3D stack of chips is now possible, thanks to recent developments in IC manufacturing techniques, making it possible to reduce chip area while increasing chip throughput and reducing power consumption. The automated process of mapping applications onto three-dimensional NoC architectures is a significant new path in 3D NoC research. This work proposes a 3D NoC partitioning approach that can identify the 3D NoC region that has to be mapped. A double particle swarm optimization (DPSO) inspired algorithmic technique, which combines the characteristics of neighbourhood search and genetic architectures, also addresses the tendency of a particle swarm algorithm to descend into locally optimal solutions. Experimental evidence supports the claim that this hybrid optimization algorithm based on Double Particle Swarm Optimisation outperforms the conventional heuristic technique in terms of output rate and energy loss. The findings demonstrate that in a network of the same size, the newly introduced router delivers the lowest loss on the longest path. Three factors, namely energy, latency or delay, and throughput, are compared between the suggested 3D mesh ONoC and its 2D version. When comparing power consumption between the 3D ONoC and its electronic and 2D equivalents, which both have 512 IP cores, it can save roughly 79.9% of the energy used by the electronic counterpart and 24.3% of the energy used by the 2D version. The network efficiency of the 3D mesh ONoC is simulated by DPSO in a variety of configurations. The outcomes also demonstrate an increase in performance over the 2D ONoC.
As a flexible communication solution, Networks-on-Chip (NoCs) have been frequently employed in the development of multiprocessor systems-on-chip (MPSoCs). By outsourcing their communication activities, NoCs permit on-chip Intellectual Property (IP) cores to communicate with one another and function at a higher level. The important components in assigning application tasks, distributing the work to the IPs, and coordinating communication among them are mapping and scheduling methods. This study aims to present a comprehensive review of recent research in the area of 3D NoC mapping and scheduling applications, grouping the results according to various parameters and offering several suggestions for further research.
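For reference, a basic particle swarm optimization loop looks like the sketch below; the DPSO variant described above additionally layers neighbourhood search and genetic operators on top of such updates. The sphere function stands in for the (unspecified) NoC mapping cost, and all parameters are conventional defaults, not values from the paper.

```python
# Basic particle swarm optimization on a toy cost function. The velocity
# update blends inertia (w), attraction to each particle's personal best
# (c1), and attraction to the swarm's global best (c2).

import random

def pso(dim=3, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)          # toy cost to minimize
    pos = [[rng.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    gbest = min(pbest, key=f)[:]                 # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return f(gbest)

print(pso())   # converges near 0 for the toy cost
```

For a discrete mapping problem, positions would encode core-to-node assignments and the update would be discretized, which is where the paper's hybrid operators come in.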
- …