
    Development of a platform for building design, validation, and optimization of modules transportation

    Dual-degree master's dissertation with UTFPR - Universidade Tecnológica Federal do Paraná. In a world that grows more intertwined with technology each day, many companies seek solutions for developing new projects or for enhancing existing complex systems, aiming to reduce material waste, energy consumption, labor dependency, project timelines, and overall costs. A recent trend in civil construction is the use of pre-fabricated components such as walls, beams, and columns. These components can be used in many different types of buildings, and, supported by technology, the system can store information about them, reducing risk during construction, cutting costs, and improving efficiency for both the construction companies and the component producers. The API was developed in Python, which enabled rapid development, and uses the FastAPI framework, providing the flexibility to create diverse methods for validating structural and architectural regulations for buildings. The API also estimates the pre-fabricated components required for a construction project. In addition, it incorporates cargo-loading optimization, a pivotal aspect of civil engineering: a genetic algorithm searches for the best arrangement of components within a container, which improves delivery times and reduces costs by avoiding unnecessary containers. As a comprehensive system designed for the market, the API includes endpoints for essential web-app functionality. It aims to prevent user errors and has been rigorously tested and approved by architects specialized in buildings that use pre-fabricated components; any errors identified during testing have already been corrected.
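
    The abstract describes the container-loading optimizer only at a high level, so the following is a minimal sketch of the approach under stated assumptions, not the platform's actual code: a FastAPI route that accepts component volumes and runs a small permutation-based genetic algorithm (order crossover, swap mutation, first-fit decoding) to estimate how many containers a shipment needs. The endpoint path, request fields, and container capacity are invented for illustration.

```python
# Hedged sketch only: real cargo loading is 3-D with orientation and weight
# constraints; here components are reduced to volumes to keep the idea visible.
import random

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class PackingRequest(BaseModel):
    volumes: list[float]               # component volumes in m^3 (hypothetical field)
    container_capacity: float = 67.0   # assumed usable volume of one container


def first_fit(order: list[int], volumes: list[float], cap: float) -> list[list[int]]:
    """Decode a packing order into containers using first-fit by volume."""
    bins: list[list[int]] = []
    loads: list[float] = []
    for i in order:
        for b, load in enumerate(loads):
            if load + volumes[i] <= cap:
                bins[b].append(i)
                loads[b] += volumes[i]
                break
        else:
            bins.append([i])
            loads.append(volumes[i])
    return bins


def order_crossover(p1: list[int], p2: list[int]) -> list[int]:
    """Classic OX crossover: copy a slice of p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [-1] * len(p1)
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child]
    for i, gene in enumerate(child):
        if gene == -1:
            child[i] = fill.pop(0)
    return child


def evolve(volumes: list[float], cap: float, pop_size: int = 30, generations: int = 200):
    """Search for a packing order that minimises the number of containers used."""
    n = len(volumes)
    cost = lambda ind: len(first_fit(ind, volumes, cap))
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            child = order_crossover(*random.sample(parents, 2))
            if random.random() < 0.2:             # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = parents + children
    return first_fit(min(pop, key=cost), volumes, cap)


@app.post("/packing")                             # hypothetical endpoint name
def pack(req: PackingRequest):
    bins = evolve(req.volumes, req.container_capacity)
    return {"containers": len(bins), "assignment": bins}
```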

    Enhancing Networks via Virtualized Network Functions

    University of Minnesota Ph.D. dissertation. May 2019. Major: Computer Science. Advisor: Zhi-Li Zhang. 1 computer file (PDF); xii, 116 pages. In an era of ubiquitous connectivity, new applications, network protocols, and online services (e.g., cloud services, distributed machine learning, cryptocurrency) are constantly being created, underpinning many of our daily activities. Emerging demands have led to growing traffic volume and complexity in modern networks, which rely heavily on a wide spectrum of specialized network functions (e.g., Firewall, Load Balancer) for performance, security, and more. Although (virtual) network functions (VNFs) are widely deployed in networks, they are instantiated in an uncoordinated manner and fail to meet the growing demands of evolving networks. In this dissertation, we argue that networks equipped with VNFs can be designed in a fashion similar to how computer software is programmed today. By following the blueprint of joint design over VNFs, networks can be made more effective and efficient. We begin by presenting Durga, a system fusing wide area network (WAN) virtualization at the gateway with local area network (LAN) virtualization technology. It seamlessly aggregates multiple WAN links into a (virtual) big pipe to better utilize those links and provides fast fail-over, minimizing application performance degradation under WAN link failures. Without support from LAN virtualization technology, existing solutions fail to provide the reliability and performance required by today's enterprise applications. We then study a newly standardized protocol, Multipath TCP (MPTCP), adopted in Durga, and show the challenge of associating MPTCP subflows in the network for the purposes of boosting throughput and enhancing security. Instead of designing a customized solution in every VNF to conquer this common challenge (making VNFs aware of MPTCP), we implement an online service named SAMPO that can be readily integrated into VNFs. Following the same principle, we make an attempt to offer consensus as a service in software-defined networks. We illustrate new network failure scenarios that are not explicitly handled by existing consensus algorithms such as Raft, and that therefore severely affect their correct or efficient operation. Finally, we reconsider VNFs deployed in a network from the perspective of network administrators: a global view of deployed VNFs brings new opportunities for performance optimization over the network, and we explore parallelism in service function chains, which compose a sequence of VNFs that data flows typically traverse in order.
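
    The last point, parallelism in service function chains, lends itself to a small illustration. The sketch below is not the dissertation's mechanism; it shows one common way to derive parallel stages from a global view of the chain, by merging consecutive VNFs whose packet-field read/write sets do not conflict. The VNF names and field sets are invented.

```python
# Hedged sketch of service-function-chain parallelism, not the dissertation's
# system: VNFs whose packet-field read/write sets do not conflict can share a
# stage and run in parallel; a conflicting VNF starts a new sequential stage.
from dataclasses import dataclass, field


@dataclass
class VNF:
    name: str
    reads: set = field(default_factory=set)
    writes: set = field(default_factory=set)


def conflicts(a: VNF, b: VNF) -> bool:
    """Order matters between a and b if one writes what the other reads or writes."""
    return bool(a.writes & (b.reads | b.writes)) or bool(b.writes & a.reads)


def parallel_stages(chain: list[VNF]) -> list[list[VNF]]:
    """Greedily merge each VNF into the current stage unless it conflicts with a
    VNF already in that stage; stages run in sequence, members run in parallel."""
    stages: list[list[VNF]] = []
    for vnf in chain:
        if stages and not any(conflicts(vnf, other) for other in stages[-1]):
            stages[-1].append(vnf)
        else:
            stages.append([vnf])
    return stages


if __name__ == "__main__":
    # Invented example chain with per-VNF header fields read/written.
    chain = [
        VNF("firewall", reads={"src_ip", "dst_ip", "dst_port"}),
        VNF("monitor", reads={"src_ip", "payload_len"}),
        VNF("load_balancer", reads={"dst_ip"}, writes={"dst_ip"}),
        VNF("nat", reads={"src_ip"}, writes={"src_ip", "src_port"}),
    ]
    for i, stage in enumerate(parallel_stages(chain)):
        print(f"stage {i}: {[v.name for v in stage]}")
        # stage 0: ['firewall', 'monitor']   stage 1: ['load_balancer', 'nat']
```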

    Developing a multi-methodological approach to hospital operating theatre scheduling

    Operating theatres and surgeons are among the most expensive resources in any hospital, so it is vital that they are used efficiently. Because of the complexity of theatre scheduling, we split the problem into levels and address the tactical and day-to-day scheduling problems. Cognitive mapping is used to identify the important factors to consider in theatre scheduling and their interactions; this allows our understanding to be developed and tested with hospital staff, ensuring that the aspects of theatre scheduling they consider important are included in the quantitative modelling. At the tactical level, our model assists hospitals in creating new theatre timetables, taking into account the reduction of the maximum number of beds required, surgeons' preferences, surgeons' availability, variations in types of theatre and their suitability for different types of surgery, limited equipment availability, and variation in the length of the cycle over which the timetable is repeated. The weightings given to each of these factors can be varied, allowing exploration of possible timetables. At the day-to-day scheduling level we focus on the advanced booking of individual patients for surgery. Using simulation, a range of algorithms for booking patients is explored, with the algorithms derived from a mixture of the scheduling literature and ideas from hospital staff. The most significant result is that more efficient schedules can be achieved by delaying scheduling until as close to the time of surgery as possible; however, this must be balanced against the need to give patients adequate warning so they can make arrangements to attend hospital for their surgery. The different stages of this project present different challenges and constraints, and therefore require different methodologies. As a whole, this thesis demonstrates that a range of methodologies can be applied to different stages of a problem to develop better solutions.
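
    The tactical-level weighting of factors lends itself to a short illustration. The sketch below is not the thesis's model; it simply scores a candidate timetable as a weighted sum of proxies for the factors the abstract lists (bed peak, surgeon preferences, theatre suitability), so that changing the weights explores different timetables. All names, proxies, and weights are invented.

```python
# Hedged sketch, not the thesis model: score a candidate theatre timetable as a
# weighted sum of factor proxies; lower scores indicate better timetables.
from dataclasses import dataclass


@dataclass
class Session:
    theatre: str
    day: int          # day within the timetable cycle
    surgeon: str
    specialty: str


def peak_bed_requirement(timetable: list[Session]) -> int:
    """Proxy: the busiest day of the cycle drives the post-operative bed peak."""
    per_day: dict[int, int] = {}
    for s in timetable:
        per_day[s.day] = per_day.get(s.day, 0) + 1
    return max(per_day.values(), default=0)


def preference_violations(timetable: list[Session], preferred_days: dict) -> int:
    """Count sessions scheduled outside a surgeon's preferred days."""
    return sum(1 for s in timetable if s.day not in preferred_days.get(s.surgeon, {s.day}))


def suitability_violations(timetable: list[Session], suitable: dict) -> int:
    """Count sessions placed in theatres unsuited to the session's specialty."""
    return sum(1 for s in timetable if s.specialty not in suitable.get(s.theatre, {s.specialty}))


def score(timetable, weights, preferred_days, suitable) -> float:
    """Lower is better; the weights trade the factors off against each other."""
    return (weights["beds"] * peak_bed_requirement(timetable)
            + weights["preferences"] * preference_violations(timetable, preferred_days)
            + weights["suitability"] * suitability_violations(timetable, suitable))


if __name__ == "__main__":
    timetable = [Session("T1", 0, "Smith", "ortho"),
                 Session("T2", 0, "Jones", "ENT"),
                 Session("T1", 1, "Smith", "ortho")]
    weights = {"beds": 1.0, "preferences": 5.0, "suitability": 10.0}
    print(score(timetable, weights,
                preferred_days={"Smith": {0, 1}, "Jones": {1}},
                suitable={"T1": {"ortho"}, "T2": {"ENT"}}))   # -> 7.0
```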

    Coarse-grain time sharing with advantageous overhead minimization for parallel job scheduling

    Parallel job scheduling on cluster computers involves several strategies to maximize both the utilization of the hardware and the throughput at which jobs are processed. Another consideration is response time: how quickly a job finishes after submission. One possible way to pursue these goals is preemption. Preemptive scheduling techniques incur an overhead cost, typically associated with swapping jobs in and out of memory, and as memory and data sets grow in size, this overhead grows with them. A technique is presented here for reducing the overhead incurred by swapping jobs in and out of memory as a result of preemption, in the context of the Scojo-PECT preemptive scheduler. Additionally, a design is presented for expanding the existing Cluster Simulator to support analysis of scheduling overhead in preemptive scheduling techniques. The application of standard fitting algorithms within a multi-state job allocation heuristic is shown to reduce the overhead incurred through preemptive scheduling.
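
    To make the overhead-reduction idea concrete, here is a deliberately simplified sketch under invented assumptions; it is not Scojo-PECT's allocation heuristic. Jobs of the incoming time slice are placed with a fitting rule that prefers nodes already holding their memory images, so swap-ins are counted only when an image actually has to be loaded.

```python
# Hedged sketch, not Scojo-PECT's actual heuristic: when a coarse-grain time
# slice switches to a new set of jobs, place each job with a fitting rule that
# prefers a node already holding its memory image, so a swap-in is only paid
# when the image must be brought in.  The memory model is deliberately simple.
from dataclasses import dataclass, field


@dataclass
class Node:
    mem: int                                       # memory available to jobs
    resident: dict = field(default_factory=dict)   # job id -> size of held image


@dataclass
class Job:
    jid: int
    mem: int


def activate_slice(jobs: list[Job], nodes: list[Node]) -> int:
    """Place the slice's jobs and return how many swap-ins were required."""
    swap_ins = 0
    for job in jobs:
        # A node that still holds the job's image can run it with no swap-in.
        if any(job.jid in n.resident for n in nodes):
            continue
        # Otherwise first-fit: the first node large enough takes the job,
        # evicting other images until it fits (evictions not costed here).
        for node in nodes:
            if node.mem >= job.mem:
                while node.mem - sum(node.resident.values()) < job.mem and node.resident:
                    node.resident.pop(next(iter(node.resident)))   # swap-out
                node.resident[job.jid] = job.mem
                swap_ins += 1
                break
    return swap_ins


if __name__ == "__main__":
    nodes = [Node(mem=64, resident={1: 16}), Node(mem=64)]
    slice_jobs = [Job(1, 16), Job(2, 32)]       # job 1 is already resident on node 0
    print(activate_slice(slice_jobs, nodes))    # -> 1 (only job 2 needs a swap-in)
```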

    Documents as functions

    Treating variable data documents as functions over their data bindings opens opportunities for building more powerful, robust and flexible document architectures to meet the needs arising from the confluence of developments in document engineering, digital printing technologies and marketing analysis. This thesis describes a combination of several XML-based technologies, both to represent and to process variable documents and their data, leading to extensible, high-quality and 'higher-order' document generation solutions. The architecture (DDF) uses XML uniformly throughout the documents and their processing tools, with the interspersing of different semantic spaces achieved through namespacing. An XML-based functional programming language (XSLT) is used to describe all intra-document variability and to implement most of the tools. Document layout intent is declared within a document as a hierarchical set of combinators attached to a tree-based graphical presentation. Evaluating a document bound to an instance of data involves using a compiler to create an executable from the document, running this with the data instance as argument to create a new document with its layout intent described, and then resolving that layout with an extensible layout processor. The use of these technologies, together with design paradigms and coding protocols, makes it possible to construct documents that not only have high flexibility and quality but also behave in higher-order ways. A document can be partially bound to data and evaluated, modifying its presentation while still remaining variably responsive to future data. Layout intent can be re-satisfied as presentation trees are modified by programmatic sections embedded within them. The key enablers are described and illustrated through examples.
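
    The partial-binding behaviour described above can be illustrated with a small analogy. The sketch below is written in Python rather than the thesis's XSLT-based DDF architecture, and its template and field names are invented; it only shows the idea of a document as a function that, when bound to part of its data, returns another document still variable in the remaining fields.

```python
# Hedged analogy in Python, not the thesis's XSLT-based DDF architecture: a
# variable document is a function over its data bindings, and partially binding
# it yields another document that is still variable in the remaining fields.
from string import Template


def _placeholders(template: str) -> set:
    """Names of the unbound $fields remaining in the template."""
    return {m.group("named") or m.group("braced")
            for m in Template(template).pattern.finditer(template)
            if m.group("named") or m.group("braced")}


def make_document(template: str):
    """Return the document as a callable over (part of) its data bindings."""
    def document(**bindings):
        missing = [k for k in _placeholders(template) if k not in bindings]
        if missing:
            # Partial binding: substitute what is known, stay variable in the rest.
            return make_document(Template(template).safe_substitute(**bindings))
        return Template(template).substitute(**bindings)
    return document


# Invented example template and field names.
letter = make_document("Dear $name, your order of $item ships on $date.")
half_bound = letter(name="Ada")                    # still a function over item and date
print(half_bound(item="a printer", date="Friday"))
# -> Dear Ada, your order of a printer ships on Friday.
```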

    Analyzing management processes within a distributed team context : the case of a Canada-China construction project

    Information flows in construction projects -- Distributed team contexts in construction -- Construction process reengineering and modeling -- Research process road map -- Case observation and data source -- PMBOK application and database establishment -- Modeling technique and unified modeling language (UML) -- Information flow patterns in distributed work -- Identification and analysis of critical construction processes -- Integration -- Procurement -- Materials

    Semantic discovery and reuse of business process patterns

    Patterns currently play an important role in modern information systems (IS) development, although their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable means of promoting the reusability of recurrent generalized models in the very early stages of development. As a statement of research in progress, this paper focuses on business process patterns and proposes an initial methodological framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.