
    Unveiling optimal operating conditions for an epoxy polymerization process using multi-objective evolutionary computation

    The optimization of the epoxy polymerization process involves a number of conflicting objectives and more than twenty decision parameters. In this paper, the problem is treated truly as a multi-objective optimization problem, and near-Pareto-optimal solutions corresponding to two and three objectives are found using the elitist non-dominated sorting GA, or NSGA-II. Objectives such as the number-average molecular weight, polydispersity index, and reaction time are considered. The first two objectives are related to the properties of the polymer, whereas the third is related to the productivity of the polymerization process. The decision variables are discrete addition quantities of various reactants, e.g., the amounts of bisphenol-A (a monomer), sodium hydroxide, and epichlorohydrin added at different time steps, whereas the satisfaction of all species balance equations is treated as constraints. This study brings out a salient aspect of using an evolutionary approach to multi-objective problem solving: important and useful patterns of reactant addition are unveiled for different optimal trade-off solutions. The systematic multi-stage optimization approach adopted here for finding optimal operating conditions for the epoxy polymerization process should further such studies on other chemical processes and real-world optimization problems.
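    To make the method concrete, below is a minimal sketch of the non-dominated sorting step at the heart of NSGA-II. It is not the authors' implementation: the two objective values in the toy population are hypothetical stand-ins for the paper's polydispersity-index and reaction-time objectives, not outputs of a real kinetic model.

```python
# Minimal sketch of non-dominated sorting, the core ranking step of NSGA-II.
# Objective tuples are (polydispersity proxy, reaction-time proxy), both
# minimized; the values are illustrative, not from the paper's model.
from typing import List, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points: List[Tuple[float, ...]]) -> List[List[int]]:
    """Partition solution indices into Pareto fronts (front 0 is best)."""
    fronts: List[List[int]] = []
    remaining = set(range(len(points)))
    while remaining:
        # A solution belongs to the current front if nothing left dominates it.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

if __name__ == "__main__":
    population = [(1.8, 5.0), (2.1, 3.0), (1.9, 4.0), (2.5, 2.5), (1.8, 6.0)]
    for rank, front in enumerate(non_dominated_sort(population)):
        print(f"front {rank}: {[population[i] for i in front]}")
```

    NSGA-II then combines this ranking with a crowding-distance measure to keep elite, well-spread solutions across generations.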

    Isotropic Robertson-Walker model universe with dynamical cosmological parameter Λ in Brans-Dicke Theory of Gravitation

    This paper discusses the Robertson-Walker space-time with a quadratic equation of state and a dynamical cosmological parameter Λ. Some exact solutions of Einstein's field equations have been obtained for three cases. The physical behavior of the models is discussed in detail.
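    For orientation, the block below sketches the ingredients the abstract names, written in one common convention; the paper's exact Brans-Dicke field equations, equation-of-state coefficients, and sign conventions may differ.

```latex
% Flat-space Robertson-Walker line element (one common convention):
ds^2 = -dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2}
       + r^2\, d\theta^2 + r^2 \sin^2\theta\, d\phi^2\right]

% A quadratic equation of state relates pressure to energy density:
p = p_0 + \alpha \rho + \beta \rho^2

% In general relativity, the corresponding Friedmann constraint with a
% time-dependent cosmological parameter reads (units 8\pi G = c = 1):
3\left(\frac{\dot a}{a}\right)^2 + \frac{3k}{a^2} = \rho + \Lambda(t)

% Brans-Dicke theory replaces the gravitational constant by a dynamical
% scalar field \phi with coupling \omega, adding \dot\phi/\phi terms
% to this equation.
```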

    Measuring the robustness of resource allocations for distributed computer systems in a stochastic dynamic environment

    Heterogeneous distributed computing systems often must function in an environment where system parameters are subject to variations during operation. Robustness can be defined as the degree to which a system can function correctly in the presence of parameter values different from those assumed. We present a methodology for quantifying the robustness of resource allocations in a dynamic environment where task execution times vary within predictable ranges and tasks arrive randomly. The methodology is evaluated by measuring the robustness of three different resource allocation heuristics within the context of the stochastically modeled dynamic environment. A Bayesian regression model is fit to the combined results of the three heuristics to demonstrate the correlation between the stochastic robustness metric and the presented performance metric. The correlation results demonstrate the significant potential of the stochastic robustness metric to predict the relative performance of the three heuristics given a common objective function.
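    In this line of work, a stochastic robustness metric is, roughly, the probability that a performance feature (e.g., makespan) stays within an acceptable bound when task execution times are random. The sketch below estimates such a probability by Monte Carlo sampling; the uniform distributions, the bound beta, and the allocation itself are illustrative assumptions, not the paper's analytical formulation.

```python
# Monte Carlo sketch of a stochastic robustness metric: the probability
# that the makespan of a fixed resource allocation stays below a bound
# beta when task execution times vary within predictable ranges.
import random

def stochastic_robustness(allocation, time_ranges, beta, trials=10_000):
    """allocation: machine -> list of task ids; time_ranges: task -> (lo, hi)."""
    hits = 0
    for _ in range(trials):
        # Sample each task's execution time from its predictable range;
        # the makespan is the heaviest machine's total time.
        makespan = max(
            sum(random.uniform(*time_ranges[t]) for t in tasks)
            for tasks in allocation.values()
        )
        hits += makespan <= beta
    return hits / trials

if __name__ == "__main__":
    allocation = {"m0": ["t0", "t1"], "m1": ["t2"]}
    time_ranges = {"t0": (2.0, 4.0), "t1": (1.0, 3.0), "t2": (3.0, 6.0)}
    print(stochastic_robustness(allocation, time_ranges, beta=7.0))
```

    Comparing this probability across candidate allocations is what lets a scheduler rank heuristics by robustness rather than by nominal performance alone.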

    Introducing the new paradigm of Social Dispersed Computing: Applications, Technologies and Challenges

    If the last decade viewed computational services as a utility, then surely this decade has transformed computation into a commodity. Computation is now progressively integrated into physical networks in a seamless way that enables cyber-physical systems (CPS) and the Internet of Things (IoT) to meet their latency requirements. Similar to the concepts of "platform as a service" and "software as a service", both cloudlets and fog computing have found their own use cases. Edge devices (which we call end or user devices for disambiguation) play the role of personal computers, dedicated to a user and to a set of correlated applications. In this new scenario, the boundaries between the network node, the sensor, and the actuator are blurring, driven primarily by the computational power of IoT nodes such as single-board computers and smartphones. The larger volumes of data generated in this type of network need clever, scalable, and possibly decentralized computing solutions that can scale independently as required. Any node can be seen as part of a graph, with the capacity to serve as a computing node, a network router node, or both. Complex applications can then be distributed over this graph of nodes to improve overall performance, e.g., the amount of data processed over time. In this paper, we identify this new computing paradigm, which we call Social Dispersed Computing, analyzing key themes in it, including a new outlook on its relation to agent-based applications. We architect this new paradigm by providing supportive application examples, including next-generation electrical energy distribution networks, next-generation mobility services for transportation, and applications for distributed analysis and identification of non-recurring traffic congestion in cities. The paper analyzes the existing computing paradigms (e.g., cloud, fog, edge, mobile edge, social, etc.), resolving the ambiguity of their definitions, and discusses the relevant foundational software technologies, the remaining challenges, and research opportunities.
    Garcia Valls, MS.; Dubey, A.; Botti, V. (2018). Introducing the new paradigm of Social Dispersed Computing: Applications, Technologies and Challenges. Journal of Systems Architecture. 91:83-102. https://doi.org/10.1016/j.sysarc.2018.05.007
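    The abstract's picture of every node acting as a compute node, a router, or both can be expressed as a small graph model. The toy below is a hypothetical illustration: the node names, roles, and greedy least-loaded placement rule are ours, not an architecture defined in the paper.

```python
# Toy model of a dispersed-computing graph: every node may compute,
# route, or both, and each task is placed greedily on the least-loaded
# compute-capable node. All names and the placement rule are illustrative.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    can_compute: bool
    can_route: bool
    load: float = 0.0
    neighbors: list = field(default_factory=list)

def place_task(nodes, cost: float) -> Node:
    """Assign a task of the given cost to the least-loaded compute node."""
    target = min((n for n in nodes if n.can_compute), key=lambda n: n.load)
    target.load += cost
    return target

if __name__ == "__main__":
    phone = Node("smartphone", can_compute=True, can_route=True)
    sbc = Node("single-board-computer", can_compute=True, can_route=False)
    router = Node("router", can_compute=False, can_route=True)
    phone.neighbors, sbc.neighbors = [router], [router]
    for cost in (1.0, 2.0, 0.5):
        print(place_task([phone, sbc, router], cost).name)
```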

    Comparison of the physical and geotechnical properties of gas-hydrate-bearing sediments from offshore India and other gas-hydrate-reservoir systems

    This paper is not subject to U.S. copyright. The definitive version was published in Marine and Petroleum Geology 58A (2014): 139-167, doi:10.1016/j.marpetgeo.2014.07.024.
    The sediment characteristics of hydrate-bearing reservoirs profoundly affect the formation, distribution, and morphology of gas hydrate. The presence and type of gas, porewater chemistry, fluid migration, and subbottom temperature may govern the hydrate formation process, but it is the host sediment that commonly dictates final hydrate habit, and whether hydrate may be economically developed. In this paper, the physical properties of hydrate-bearing regions offshore eastern India (Krishna-Godavari and Mahanadi Basins) and the Andaman Islands, determined from Expedition NGHP-01 cores, are compared to each other, well logs, and published results of other hydrate reservoirs. Properties from the hydrate-free Kerala-Konkan basin off the west coast of India are also presented. Coarser-grained reservoirs (permafrost-related and marine) may contain high gas-hydrate pore saturations, while finer-grained reservoirs may contain low-saturation disseminated or more complex gas hydrates, including nodules, layers, and high-angle planar and rotational veins. However, even in these fine-grained sediments, gas hydrate preferentially forms in coarser sediment or fractures, when present. The presence of hydrate in conjunction with other geologic processes may be responsible for sediment porosity being nearly uniform for almost 500 m off the Andaman Islands. Properties of individual NGHP-01 wells and regional trends are discussed in detail. However, comparison of marine and permafrost-related Arctic reservoirs provides insight into the inter-relationships and common traits between physical properties and the morphology of gas-hydrate reservoirs regardless of location. Extrapolation of properties from one location to another also enhances our understanding of gas-hydrate reservoir systems. Grain size and porosity effects on permeability are critical, both locally to trap gas and regionally to provide fluid flow to hydrate reservoirs. Index properties corroborate more advanced consolidation and triaxial strength test results and can be used for predicting behavior in other NGHP-01 regions. Pseudo-overconsolidation is present near the seafloor and is underlain by underconsolidation at depth at some NGHP-01 locations.
    This work was supported by the Coastal and Marine Geology, and Energy Programs of the U.S. Geological Survey. Partial support for this research was provided by Interagency Agreement DE-FE0002911 between the USGS Gas Hydrates Project and the U.S. Department of Energy's Methane Hydrates R&D Program.
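    The abstract's point that grain size and porosity jointly control permeability is often captured by relations of Kozeny-Carman type. The form below is a standard textbook expression for idealized granular media, shown only as a sketch; it is not an equation taken from this paper.

```latex
% Kozeny-Carman-type relation (textbook form, not from this paper):
% permeability k scales with the square of a characteristic grain
% diameter d and rises steeply with porosity n.
k = \frac{d^2}{180}\,\frac{n^3}{(1-n)^2}
```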

    Differentiating ATAAD from ST-elevation myocardial infarction

    Due to the rarity and elusive nature of the disease, the diagnosis of acute non-traumatic Type A aortic dissection (ATAAD) is suspected at initial evaluation in fewer than half of the patients ultimately diagnosed with aortic dissection (AD) in the Emergency Department (ED). Additionally, the clinical presentation of ATAAD and ST-elevation myocardial infarction (STEMI) may be quite similar, including changes in the electrocardiogram (ECG), and the chest radiograph may be normal in up to 20% of cases. It is imperative that clinicians working in the acute care setting remain vigilant and consider ATAAD a top differential diagnosis in adult patients with chest pain, because the therapeutic interventions for STEMI are contraindicated in ATAAD. The high-risk features established by the American Heart Association (AHA) for ATAAD, based on underlying comorbidities, clinical symptoms, physical examination, and diagnostic findings, along with a proper clinical and diagnostic workup, will help to discriminate ATAAD from a STEMI and avoid detrimental outcomes.
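    The AHA high-risk features the abstract refers to are commonly operationalized as the aortic dissection detection risk score (ADD-RS), which awards one point for each of three feature categories present. The sketch below assumes that scoring scheme; the field names and the example feature lists in the docstring are illustrative.

```python
# Sketch of the aortic dissection detection risk score (ADD-RS), one
# common operationalization of the AHA high-risk features: one point per
# category present, giving a score of 0-3. Field names are illustrative.
def add_risk_score(high_risk_condition: bool,
                   high_risk_pain: bool,
                   high_risk_exam: bool) -> int:
    """Examples per category: condition - Marfan syndrome, known aortic
    disease; pain - abrupt, severe, tearing chest or back pain;
    exam - pulse deficit, new aortic regurgitation murmur."""
    return sum((high_risk_condition, high_risk_pain, high_risk_exam))

if __name__ == "__main__":
    score = add_risk_score(high_risk_condition=False,
                           high_risk_pain=True,
                           high_risk_exam=True)
    print(f"ADD-RS = {score}")  # higher scores typically prompt imaging
```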

    Multi-criteria analysis in modern information management

    Department Head: L. Darrell Whitley. Includes bibliographical references.
    The past few years have witnessed an overwhelming amount of research in the field of information security and privacy. An encouraging outcome of this research is the vast accumulation of theoretical models that help to capture the various threats that persistently hinder the best possible usage of today's powerful communication infrastructure. While theoretical models are essential to understanding the impact of any breakdown in the infrastructure, they are of limited application if the underlying business-centric view is ignored. Information management in this context is the strategic management of the infrastructure, incorporating knowledge about causes and consequences to arrive at the right balance between risk and profit. Modern information management systems are home to a vast repository of sensitive personal information. While these systems depend on quality data to boost the Quality of Service (QoS), they also run the risk of violating privacy regulations. The presence of network vulnerabilities also weakens these systems, since security policies cannot always be enforced to prevent all forms of exploitation. This problem is more strongly grounded in the insufficient availability of resources than in the inability to predict zero-day attacks. System resources also impact the availability of access to information, which is itself becoming increasingly ubiquitous. Information access times in such ubiquitous environments must be maintained within a specified QoS level. In short, modern information management must consider the mutual interactions between risks, resources, and services to achieve wide-scale acceptance. This dissertation explores these problems in the context of three important domains, namely disclosure control, security risk management, and wireless data broadcasting. Research in these domains has been put together under the umbrella of multi-criteria decision making to signify that "business survival" is an equally important factor to consider while analyzing risks and providing solutions for their resolution. We emphasize that businesses are always bound by constraints in their effort to mitigate risks and therefore benefit the most from a framework that allows the exploration of solutions that abide by those constraints. Towards this end, we revisit the optimization problems being solved in these domains and argue that they overlook the underlying cost-benefit relationship. Our approach in this work is motivated by the inherent multi-objective nature of the problems. We propose formulations that help expose the cost-benefit relationship across the different objectives that must be met in these problems. Such an analysis provides a decision maker with the necessary information to make an informed decision on the impact of choosing a control measure over the business goals of an organization. The theories and tools necessary to perform this analysis are introduced to the community.
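    One simple way to picture the cost-benefit exploration described here is to sweep a weight across a two-objective (residual risk, cost) scalarization and watch the preferred option change. The portfolios and their numbers below are invented for the sketch; the dissertation's actual formulations are richer multi-objective models.

```python
# Illustrative weighted-sum sweep over hypothetical security-control
# portfolios, exposing the risk/cost trade-off a decision maker would
# inspect. All numbers are invented for this sketch.
portfolios = {
    "baseline":        {"residual_risk": 0.9, "cost": 1.0},
    "patching+audit":  {"residual_risk": 0.5, "cost": 3.0},
    "full-monitoring": {"residual_risk": 0.2, "cost": 7.0},
}

def best_portfolio(w: float) -> str:
    """Minimize w*risk + (1-w)*normalized cost for a risk weight w in [0, 1]."""
    max_cost = max(p["cost"] for p in portfolios.values())
    return min(portfolios,
               key=lambda name: w * portfolios[name]["residual_risk"]
               + (1 - w) * portfolios[name]["cost"] / max_cost)

if __name__ == "__main__":
    for w in (0.1, 0.5, 0.9):
        print(f"risk weight {w}: choose {best_portfolio(w)}")
```

    Sweeping the weight traces out the trade-off curve, which is exactly the information a constrained organization needs to pick a control measure deliberately rather than by a single fixed objective.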