96 research outputs found

    Deterministic and stochastic scheduling: Extended abstracts


    Weighted Networks: Applications from Power grid construction to crowd control

    Since their discovery in the 1950s by Erdős and Rényi, network theory (the study of objects and their associations) has blossomed into a full-fledged branch of mathematics. Because of the network's flexibility, diverse scientific problems can be reformulated as networks and studied using a common set of tools. I define a network G = (V, E) as composed of two parts: (i) the set of objects V, called nodes, and (ii) the set of relationships (associations) E, called links, that connect objects in V. We can extend the classic network of nodes and links by describing the intensity of these associations with weights. More formally, weighted networks augment the classic network with a function f(e) from links to the real line, uncovering powerful ways to model real-world applications. This thesis studies new ways to construct robust micro power grids, mines people's perceptions of causality on a social network, and proposes a new way to analyze crowdsourcing, all in the context of the weighted network model.

    The current state of Earth's ecosystem and intensifying climate change call on scientists to find new ways to harvest clean, affordable energy. A microgrid, a neighborhood-scale power grid built from renewable energy sources attached to personal homes, suggests one way to ameliorate this energy crisis. We can study the stability (robustness) of such a small-scale system with weighted networks. A novel use of weighted networks and percolation theory guides the safe and efficient construction of power lines (links, E) connecting a small set of houses (nodes, V) to one another, weighting each power line by the distance between houses. This new look at the robustness of microgrid structures calls into question the efficacy of the traditional utility.

    The next study uses the Twitter social network to compare and contrast causal language with everyday conversation. Collecting a set of 1 million tweets, we find that words (unigrams), parts of speech, named entities, and sentiment signal the use of informal causal language.

    Breaking a problem that is difficult for a computer to solve into many parts, and distributing these tasks to a group of humans to solve, is called crowdsourcing. My final project asks volunteers to 'reply' to questions asked of them and 'supply' novel questions for others to answer. I model this 'reply and supply' framework as a dynamic weighted network, proposing new theories about the network's behavior and how to steer it toward worthy goals. This thesis demonstrates novel uses of, enhances the current scientific literature on, and presents novel methodology for weighted networks.
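    The weighted-network model G = (V, E) with a distance-based weight function f(e), as used for the microgrid study, can be sketched directly. The houses, coordinates, and candidate lines below are hypothetical, chosen only to show the construction:

```python
import math

# Hypothetical microgrid: houses (nodes, V) with 2-D coordinates.
houses = {"A": (0.0, 0.0), "B": (3.0, 4.0), "C": (6.0, 0.0)}

# Candidate power lines (links, E).
links = [("A", "B"), ("B", "C"), ("A", "C")]

def weight(u, v):
    """f(e): weight a power line by the Euclidean distance between houses."""
    (x1, y1), (x2, y2) = houses[u], houses[v]
    return math.hypot(x2 - x1, y2 - y1)

# The weighted network: each link mapped to its real-valued weight.
G = {(u, v): weight(u, v) for u, v in links}
print(G)  # {('A', 'B'): 5.0, ('B', 'C'): 5.0, ('A', 'C'): 6.0}
```

    A percolation-style robustness analysis would then ask which subsets of these weighted links keep the houses connected as lines fail.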

    Stakeholders' awareness of construction claims management models in the Nigerian construction industry

    Almost all construction disputes are products of inefficient management of construction claims. Several instruments have been developed in studies conducted in many countries of the world to ameliorate this problem, which is most prominent at the execution phase of construction projects. However, a dispute-free construction process in the Nigerian construction industry is still a mirage. Therefore, this study assessed the level of awareness and utilization of these instruments, as well as the reasons for their present level of usage. These objectives were achieved through a survey of stakeholders engaged in building projects executed in Ondo State, Nigeria over a period of nine years. Data collected were analysed using percentiles, mean scores, and the Kruskal-Wallis test. Among the three groups, consultants had the highest level of awareness and were best at using the instruments for managing construction claims. Furthermore, the stakeholders were aware of four of the eleven identified instruments, whereas only two were used by them. In total, 42% of the participants opined that the main reason for the low level of usage is that the instruments are not convenient to use, whereas 31% agreed that the instruments would not yield the expected results. The implication is that the much-expected amicable settlement of construction claims disputes remains unattainable, owing to the stakeholders' inability to apply the methodologies that would enable them to achieve it. The study recommended that professional bodies and government agencies carry out adequate sensitisation on the importance of using the frameworks, so as to ameliorate the problem of disputed construction claims.
    Keywords: Construction claims, Instruments, Level of awareness, Level of usage, Mode
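    The group comparison reported above can be sketched with a Kruskal-Wallis H statistic. The scores and group labels below are invented for illustration (they are not the study's data), and the sketch omits the tie-correction factor:

```python
# Pure-Python Kruskal-Wallis H statistic (no tie correction):
# H = 12 / (N (N + 1)) * sum(R_i^2 / n_i) - 3 (N + 1)
def kruskal_wallis_h(*groups):
    pooled = sorted(x for g in groups for x in g)
    # Assign average ranks so tied values share a rank.
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        for k in range(i, j):
            rank[pooled[k]] = (i + 1 + j) / 2
        i = j
    n = len(pooled)
    h = sum(sum(rank[x] for x in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Hypothetical awareness scores for the three stakeholder groups.
clients     = [1, 2, 3]
contractors = [4, 5, 6]
consultants = [8, 9, 7]
h = kruskal_wallis_h(consultants, contractors, clients)
print(round(h, 3))  # 7.2 -- a large H suggests the groups' scores differ
```

    In practice the statistic would be compared against a chi-squared distribution with (number of groups - 1) degrees of freedom to obtain a p-value.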

    Resource Allocation in Networked and Distributed Environments

    A central challenge in networked and distributed systems is resource management: how can we partition the available resources in the system across competing users, such that individual users are satisfied and certain system-wide objectives of interest are optimized? In this thesis, we address many such fundamental and practical resource allocation problems that arise in networked and distributed environments. We invoke two sophisticated paradigms, linear programming and probabilistic methods, and develop provably good approximation algorithms for a diverse collection of applications. Our main contributions are as follows.
    Assignment problems: An assignment problem involves a collection of objects and locations, and a load value associated with each object-location pair. Our goal is to assign the objects to locations while minimizing various cost functions of the assignment. This setting models many applications in manufacturing, parallel processing, distributed storage, and wireless networks. We present a single algorithm for assignment that generalizes many classical assignment schemes known in the literature. Our scheme is derived through a fusion of linear algebra and randomization. In conjunction with other ideas, it leads to novel guarantees for multi-criteria parallel scheduling, broadcast scheduling, and social network modeling.
    Precedence-constrained scheduling: We consider two precedence-constrained scheduling problems, namely sweep scheduling and tree scheduling, which are inspired by emerging applications in high-performance computing. Through careful use of randomization, we devise the first approximation algorithms for these problems with near-optimal performance guarantees.
    Wireless communication: Wireless networks are prone to interference, which prohibits proximate network nodes from transmitting simultaneously and introduces fundamental challenges in the design of wireless communication protocols. We develop fresh geometric insights for characterizing wireless interference, and combine this geometric analysis with linear programming and randomization to derive near-optimal algorithms for latency minimization and throughput capacity estimation in wireless networks.
    In summary, the innovative use of linear programming and probabilistic techniques for resource allocation, and the novel ways of connecting them with application-specific ideas, is the pivotal theme and focal point of this thesis.
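    The assignment setting can be made concrete with a toy brute-force solver over a hypothetical 3x3 load matrix. The thesis develops far more general LP- and randomization-based schemes; this sketch only states the optimization problem in code:

```python
from itertools import permutations

# cost[i][j]: hypothetical load of placing object i at location j.
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]

def min_cost_assignment(cost):
    """Exhaustively search one-to-one assignments, minimizing total load.

    Exponential in n -- usable only as a specification of the problem,
    which is why LP relaxations and randomized rounding matter at scale.
    """
    n = len(cost)
    best = None
    for perm in permutations(range(n)):  # perm[i] = location of object i
        c = sum(cost[i][perm[i]] for i in range(n))
        if best is None or c < best[0]:
            best = (c, perm)
    return best

print(min_cost_assignment(cost))  # (5, (1, 0, 2)): minimum total load is 5
```

    Replacing the total-load objective with, say, the maximum per-location load recovers makespan-style variants of the same setting.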

    Critical phenomena in complex networks

    The combination of the compactness of networks, featuring small diameters, and their complex architectures results in a variety of critical effects dramatically different from those in cooperative systems on lattices. In the last few years, researchers have made important steps toward understanding the qualitatively new critical phenomena in complex networks. We review the results, concepts, and methods of this rapidly developing field. Here we mostly consider two closely related classes of these critical phenomena, namely structural phase transitions in the network architectures and transitions in cooperative models on networks as substrates. We also discuss systems where a network and interacting agents on it influence each other. We overview a wide range of critical phenomena in equilibrium and growing networks, including the birth of the giant connected component, percolation, k-core percolation, phenomena near epidemic thresholds, condensation transitions, critical phenomena in spin models placed on networks, synchronization, and self-organized criticality effects in interacting systems on networks. We also discuss strong finite-size effects in these systems and highlight open problems and perspectives.
    Comment: Review article, 79 pages, 43 figures, 1 table, 508 references, extended
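    One structural transition reviewed here, the birth of the giant connected component, amounts to tracking the size of the largest component as links are added. A minimal BFS-based sketch on a hand-picked toy graph:

```python
from collections import deque

def giant_component_size(nodes, edges):
    """Return the size of the largest connected component (BFS over an
    undirected graph given as a node iterable and an edge list)."""
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        seen.add(s)
        size, q = 0, deque([s])
        while q:
            u = q.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        best = max(best, size)
    return best

# Toy graph: components {0,1,2}, {3,4}, and isolated node 5.
print(giant_component_size(range(6), [(0, 1), (1, 2), (3, 4)]))  # 3
```

    Running this while links are added at random (the Erdős-Rényi process) exhibits the percolation transition: the largest component jumps from microscopic to a finite fraction of all nodes.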

    A resource allocation mechanism based on cost function synthesis in complex systems

    While the management of resources in computer systems can greatly impact the usefulness and integrity of the system, finding an optimal solution to the management problem is unfortunately NP-hard. Adding to the complexity, today's 'modern' systems, such as multimedia, medical, and military systems, may be, and often are, comprised of interacting real-time and non-real-time components. In addition, these systems can be driven by a host of non-functional objectives, often differing not only in nature, importance, and form, but also in dimensional units and range, and themselves interacting in complex ways. We refer to systems exhibiting such characteristics as Complex Systems (CS). We present a method for handling the multiple non-functional system objectives in CS by addressing decomposition, quantification, and evaluation issues. Our method results in better allocations, improved objective satisfaction, improved overall system performance, and reduced cost, in a global sense. Moreover, we consider the problem of formulating the cost of an allocation driven by system objectives. We start by discussing issues and relationships among global objectives, their decomposition, and cost functions for the evaluation of system objectives. As an example of objective and cost function development, we introduce the concept of deadline balancing. Next, we prove the existence of combining models and their underlying conditions, and describe a hierarchical model for system objective function synthesis. This synthesis is performed solely to measure the level of objective satisfaction in a proposed hardware-to-software allocation, not to design individual software modules. Examples are given to show how the model applies to actual multi-objective problems. In addition, the concept of deadline balancing is extended to a new scheduling concept, namely Inter-Completion-Time Scheduling (ICTS). Finally, experiments based on simulation capture various properties of the synthesis approach as well as ICTS. A prototype implementation of the cost function synthesis and evaluation environment is described, highlighting the applicability and usefulness of the synthesis in realistic applications.
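    As a hedged illustration of synthesizing one global cost from several sub-objectives, the sketch below combines normalized sub-costs with a weighted sum, and penalizes uneven slack as a stand-in for deadline balancing. Both the combining rule and the variance penalty are assumptions chosen for illustration, not the paper's actual combining models:

```python
def deadline_balance_cost(slacks):
    """Penalize uneven slack (deadline minus completion time) across tasks
    via the variance of per-task slack; zero means perfectly balanced."""
    m = sum(slacks) / len(slacks)
    return sum((s - m) ** 2 for s in slacks) / len(slacks)

def global_cost(subcosts, weights):
    """Weighted-sum synthesis of normalized sub-objective costs into one
    scalar used to compare candidate hardware-to-software allocations."""
    assert len(subcosts) == len(weights)
    return sum(w * c for w, c in zip(weights, subcosts))

print(deadline_balance_cost([2, 2, 2]))  # 0.0: evenly balanced slack
print(deadline_balance_cost([0, 2, 4]))  # positive: uneven slack penalized
```

    Normalizing each sub-cost to a common range before combining is what lets objectives with different dimensional units and ranges coexist in one evaluation.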

    Simplexity: A Hybrid Framework for Managing System Complexity

    Knowledge management, management of mission critical systems, and complexity management rely on a triangular support connection. Knowledge management provides ways of creating, corroborating, collecting, combining, storing, transferring, and sharing the know-why and know-how for reactively and proactively handling the challenges of mission critical systems. Complexity management, treating “complexity” as an umbrella term for size, mass, diversity, ambiguity, fuzziness, randomness, risk, change, chaos, instability, and disruption, delivers support to both knowledge and systems management: on the one hand, support for dealing with the complexity of managing knowledge, i.e., furnishing criteria for a common and operationalized terminology, for dealing with mediating and moderating concepts, paradoxes, and controversial validity; on the other hand, support for systems managers coping with risks, lack of transparency, ambiguity, fuzziness, pooled and reciprocal interdependencies (e.g., for attaining interoperability), instability (e.g., downtime, oscillations, disruption), and even disasters and catastrophes. This support results from the evident intersection of complexity management and systems management, e.g., in the shape of complex adaptive systems, deploying slack, establishing security standards, and utilizing hybrid concepts (e.g., hybrid clouds, hybrid procedures for project management). The complexity-focused manager of mission critical systems should deploy an ambidextrous strategy: both reducing complexity, e.g., by avoiding risks, and establishing a potential to handle complexity, i.e., investing in high availability, business continuity, slack, optimal coupling, characteristics of high reliability organizations, and agile systems.
    This complexity-focused hybrid approach is labeled “simplexity.” It constitutes a blend of complexity reduction and complexity augmentation, relying on the generic logic of hybrids: the strengths of complexity reduction can compensate for the weaknesses of complexity augmentation, and vice versa. The deficiencies of prevalent simplexity models signal that this blended approach requires a sophisticated architecture. In order to provide a sound base for coping with the meta-complexity of both complexity and its management, this architecture comprises interconnected components, domains, and dimensions as building blocks of simplexity, as well as paradigms, patterns, and parameters for managing simplexity. The need for a balanced paradigm for complexity management, capable of overcoming not only the prevalent bias toward complexity reduction but also the weaknesses of prevalent concepts of simplexity, serves as the starting point of the argumentation in this chapter. To provide a practical guideline to meet this demand, an innovative model of simplexity is conceived. This model creates awareness for differentiating components, dimensions, and domains of complexity management, as well as for various species of interconnectedness, such as the aligned upsizing and downsizing of capacities, the relevance of diversity management (e.g., in terms of deviations and errors), and the scope of risk management instruments. Strategies (e.g., heuristics, step-by-step procedures) and tools for managing simplexity-guided projects are outlined.

    Sequencing and scheduling: algorithms and complexity


    Robust and secure resource management for automotive cyber-physical systems

    2022 Spring. Includes bibliographical references.
    Modern vehicles are examples of complex cyber-physical systems with tens to hundreds of interconnected Electronic Control Units (ECUs) that manage various vehicular subsystems. With the shift towards autonomous driving, emerging vehicles are being characterized by an increase in the number of hardware ECUs, greater complexity of applications (software), and more sophisticated in-vehicle networks. These advances have resulted in numerous challenges that impact the reliability, security, and real-time performance of these emerging automotive systems. Some of the challenges include coping with computation and communication uncertainties (e.g., jitter), developing robust control software, detecting cyber-attacks, ensuring data integrity, and enabling confidentiality during communication. However, solutions to overcome these challenges incur additional overhead, which can catastrophically delay the execution of real-time automotive tasks and message transfers. Hence, there is a need for a holistic, system-level solution for resource management in automotive cyber-physical systems that enables robust and secure automotive system design while satisfying a diverse set of system-wide constraints. ECUs in vehicles today run a variety of automotive applications ranging from simple vehicle window control to highly complex Advanced Driver Assistance System (ADAS) applications. The aggressive attempts of automakers to make vehicles fully autonomous have increased the complexity and data rate requirements of applications, and have further led to the adoption of advanced artificial intelligence (AI) based techniques for improved perception and control. Additionally, modern vehicles are becoming increasingly connected with various external systems to realize more robust vehicle autonomy.
    These paradigm shifts have resulted in significant overheads in resource-constrained ECUs and increased the complexity of the overall automotive system (including heterogeneous ECUs, network architectures, communication protocols, and applications), which has severe performance and safety implications for modern vehicles. The increased complexity of automotive systems introduces several computation and communication uncertainties in automotive subsystems that can cause delays in applications and messages, resulting in missed real-time deadlines. Missing deadlines for safety-critical automotive applications can be catastrophic, and this problem will be further aggravated in future autonomous vehicles. Additionally, due to the harsh operating conditions of automotive embedded systems (such as high temperatures, vibrations, and electromagnetic interference (EMI)), there is a significant risk to the integrity of the data exchanged between ECUs, which can lead to faulty vehicle control. These challenges demand a more reliable design of automotive systems that is resilient to uncertainties and supports data integrity goals. Additionally, the increased connectivity of modern vehicles has made them highly vulnerable to various kinds of sophisticated security attacks. Hence, it is also vital to ensure the security of automotive systems, and this will become crucial as connected and autonomous vehicles become more ubiquitous. However, imposing security mechanisms on resource-constrained automotive systems can result in additional computation and communication overhead, potentially leading to further missed deadlines. Therefore, it is crucial to design lightweight techniques that incur minimal overhead while achieving the above-mentioned goals and preserving the real-time performance of the system.
    We address these issues by designing a holistic resource management framework called ROSETTA that enables robust and secure automotive cyber-physical system design while satisfying a diverse set of constraints related to reliability, security, real-time performance, and energy consumption. To achieve reliability goals, we have developed several techniques for reliability-aware scheduling and multi-level monitoring of signal integrity. To achieve security objectives, we have proposed a lightweight security framework that provides confidentiality and authenticity while meeting both security and real-time constraints. We have also introduced multiple deep learning-based intrusion detection systems (IDSs) to monitor and detect cyber-attacks in the in-vehicle network. Lastly, we have introduced novel techniques for jitter management and security management, and deployed lightweight IDSs on resource-constrained automotive ECUs while ensuring the real-time performance of the automotive systems.
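    As an illustration of lightweight message authenticity in an in-vehicle setting (not necessarily the mechanism ROSETTA uses), the sketch below tags a CAN-style payload with a truncated HMAC under a hypothetical pre-shared key. Truncation reflects the tight payload budgets of in-vehicle frames:

```python
import hmac
import hashlib

KEY = b"demo-shared-ecu-key"  # hypothetical pre-shared key between ECUs

def tag(payload: bytes, mac_len: int = 4) -> bytes:
    """Truncated HMAC-SHA256 over a message payload. In-vehicle frames
    leave few spare bytes, so schemes commonly truncate the MAC."""
    return hmac.new(KEY, payload, hashlib.sha256).digest()[:mac_len]

def verify(payload: bytes, mac: bytes) -> bool:
    """Constant-time check that the received MAC matches the payload."""
    return hmac.compare_digest(tag(payload, len(mac)), mac)

msg = b"\x01\x02\x03\x04"  # example 4-byte payload
t = tag(msg)
assert verify(msg, t)                      # authentic frame accepted
assert not verify(b"\x01\x02\x03\x05", t)  # tampered frame rejected
```

    The real design trade-off is the one the abstract describes: longer MACs and stronger checks raise security but consume payload bytes and CPU time on resource-constrained ECUs, which is why lightweight constructions matter.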

    Sequencing by enumerative methods
