95 research outputs found

    Knowledge Management and the Contextualisation of Intellectual Property Rights in Innovation Systems

    Get PDF
    Support for this research was provided by Genome Canada through the Ontario Genomics Institute and Genome Alberta. © David Castle et al. 2010. Peer reviewed. Publisher PDF.

    Numerical methods for shape optimization of photonic nanostructures

    Get PDF

    AN EXAMINATION OF THE PROCESS OF FORGIVENESS AND THE RELATIONSHIP AMONG STATE FORGIVENESS, SELF-COMPASSION, AND PSYCHOLOGICAL WELL-BEING EXPERIENCED BY BUDDHISTS IN THE UNITED STATES

    Get PDF
    The purpose of this study was to investigate the process of forgiveness and the relationship among state forgiveness, self-compassion, and psychological well-being experienced by Buddhists in the United States. An integral feminist framework was developed for this mixed-method study. For the quantitative component of this study, a convenience sample of 112 adults completed an online survey. Multiple regression analysis was performed to examine: (a) the impact of gender, age, and the years spent in Buddhist practice on state forgiveness and self-compassion; (b) the outcome of psychological well-being in relation to state forgiveness and self-compassion; and (c) self-compassion as a mediator for the relationship between state forgiveness and psychological well-being. Quantitative results indicated: (a) state forgiveness positively predicted psychological well-being; (b) the years spent in Buddhist practice positively predicted self-compassion; (c) self-compassion positively predicted psychological well-being; and (d) self-compassion partially mediated the relationship between state forgiveness and psychological well-being. Age did not predict any of the three primary variables. Gender did not predict state forgiveness. For the qualitative component of this study, this researcher purposefully selected four adults from a local Buddhist community in central Kentucky and conducted two in-depth interviews to explore their subjective experiences of forgiveness within their own contexts. A holistic-content narrative analysis revealed unique features of each interviewee’s forgiveness process interwoven with the socio-cultural, family and relational contexts. From a phenomenological analysis, common themes and elements of the interviewees’ forgiveness processes emerged. 
Qualitative findings corresponded to the quantitative results concerning state forgiveness as a route to psychological well-being, the positive relationship between Buddhist practice and compassion, and the role of self-compassion in the relationship between state forgiveness and psychological well-being. Qualitative findings also suggested the following. First, two-way compassion toward self and the offender was a facilitating factor for forgiveness that may be unique to Buddhists. Second, one's actual experience of forgiveness may encompass not only cognitive, affective, and behavioral changes, but also transformation of self and of perspective on meaning and purpose in life. Third, Enright and his colleagues' (1998) stage and process models of forgiveness were useful for understanding Buddhists' experiences and processes of forgiveness.
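The partial-mediation finding above can be illustrated with a minimal ordinary-least-squares sketch. The data and effect sizes below are synthetic and purely hypothetical, not the study's; the point is only the mechanics: for nested linear regressions, the total effect decomposes exactly into direct plus indirect (a·b) components.

```python
import numpy as np

def ols_coefs(X, y):
    """Least-squares slope coefficients, with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]  # drop the intercept

rng = np.random.default_rng(0)
n = 500
# Synthetic data with a built-in mediation structure (all coefficients hypothetical)
forgiveness = rng.normal(size=n)                                       # predictor X
compassion = 0.5 * forgiveness + rng.normal(size=n)                    # mediator M
wellbeing = 0.2 * forgiveness + 0.4 * compassion + rng.normal(size=n)  # outcome Y

c_total = ols_coefs(forgiveness[:, None], wellbeing)[0]   # X -> Y (total effect)
a = ols_coefs(forgiveness[:, None], compassion)[0]        # X -> M
b, c_direct = ols_coefs(np.column_stack([compassion, forgiveness]), wellbeing)
indirect = a * b   # for OLS, total = direct + indirect holds exactly
```

Partial mediation corresponds to `c_direct` remaining positive but smaller than `c_total` once the mediator is controlled for.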

    Study and development of a reliable fiducials-based localization system for multicopter UAVs flying indoor

    Get PDF
    The recent evolution of technology in the automation, agriculture, IoT, and aerospace fields has created a growing demand for mobile robots capable of autonomous operation and movement to accomplish various tasks. Aerial platforms are expected to play a central role in the future due to their versatility and swift intervention capabilities. However, the effective utilization of these platforms faces a significant challenge in localization, a vital aspect of their interaction with the surrounding environment. While GNSS localization systems have established themselves as reliable solutions for open-space scenarios, the same approach is not viable in indoor settings, where localization remains an open problem, as witnessed by the lack of extensive literature on the topic. In this thesis, we address this challenge by proposing a dependable solution for small multi-rotor UAVs using a Visual Inertial Odometry localization system. Our KF-based localization system reconstructs the pose by fusing data from onboard sensors. The primary source of information is the recognition of AprilTag fiducial markers, strategically placed in known positions to form a "map". Building upon prior research and thesis work conducted at our university, we extend and enhance this system. We begin with a concise introduction, followed by a justification of our chosen strategies based on the current state of the art. We then provide an overview of the key theoretical, mathematical, and technical aspects that support our work. These concepts are fundamental to the design of innovative strategies that address challenges such as fusing data from different AprilTag recognitions and eliminating misleading measurements.
To validate our algorithms and their implementation, we conduct experimental tests on two distinct platforms, using localization accuracy and computational complexity as performance indices to demonstrate the practical viability of the proposed system. By tackling the critical issue of indoor localization for aerial platforms, this thesis aims to contribute to the advancement of robotics technology, opening avenues for enhanced autonomy and efficiency across various domains.
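The kind of Kalman-filter fusion described in this abstract can be sketched minimally as follows. This is a toy 2-D position filter, not the thesis's actual estimator (which tracks full pose); all matrices, noise levels, and the identity measurement model are assumptions for illustration.

```python
import numpy as np

# State: planar position [x, y]. Prediction integrates a velocity input
# (e.g. from an IMU); the update uses a position fix inferred from an
# AprilTag detected at a known map location.
def kf_predict(x, P, u, dt, Q):
    x = x + u * dt          # simple constant-velocity motion model
    P = P + Q               # process noise inflates the covariance
    return x, P

def kf_update(x, P, z, R):
    # Measurement model H = I: the tag detection yields position directly
    S = P + R                          # innovation covariance
    K = P @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - x)                # correct the state with the residual
    P = (np.eye(len(x)) - K) @ P       # shrink the covariance
    return x, P
```

A misleading measurement (e.g. a misdetected tag) would show up as a large residual `z - x` relative to `S`, which is the usual basis for gating such measurements out.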

    A triage approach to streamline environmental footprinting : a case study for liquid crystal displays

    Get PDF
    Thesis (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 64-69). Quantitative environmental performance evaluation methods are desired given the growing certification and labeling landscape for consumer goods. Challenges associated with existing methods, such as life cycle assessment (LCA), may be prohibitive for complex goods such as information technology (IT). Conventional LCA is resource-intensive and lacks harmonized guidance for incorporating uncertainty, while current methods to streamline LCA may amplify uncertainty, undermining robustness. Despite high uncertainty, effective and efficient streamlining approaches may be possible. A methodology is proposed to identify high-impact activities within the life cycle of a specific product class for a streamlined assessment with a high degree of inherent uncertainty. First, a screening assessment is performed using Monte Carlo simulations, applying existing activity (materials and processes), impact, and uncertainty data, to identify the elements with the most leverage to reduce overall environmental impact uncertainty. This data triage is informed by sensitivity analysis parameters produced by the simulations. Targeted data collection is then carried out for key activities until overall uncertainty is reduced to the point where a product class's impact probability distribution is distinct from others within a specified error rate. In this thesis, we find that triage and prioritization are possible despite high uncertainty. The methodology was applied to the case study of liquid crystal display (LCD) classes, producing a clear hierarchy of data importance for reducing uncertainty in the overall impact result. Specific data collection was required for only a subset of processes and activities (22 of about 50) to enable discrimination of LCDs with a low error rate (9%).
Most of these priority activities relate to the manufacturing and use phases. The number of priority activities targeted may be balanced against the level to which they can be specified. It was found that ostensible product attributes alone are insufficient to discriminate with low error, even at high levels of specificity. This quantitative streamlining method is ideal for complex products for which there is great uncertainty in data collection and modeling. The method may inform early product design decisions and enable harmonization of standardization efforts. By Melissa Lee Zgola. S.M. in Technology and Policy.
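The Monte Carlo screening-and-triage step can be sketched as follows. The activity names, means, and uncertainty levels below are hypothetical stand-ins (not the thesis's LCD inventory data); the sketch only shows the mechanics of ranking activities by their leverage on overall impact uncertainty.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000
# Hypothetical per-activity impacts: (mean impact, relative standard deviation)
activities = {"panel_fab": (50.0, 0.8), "backlight": (20.0, 0.3),
              "use_phase": (120.0, 0.5), "transport": (5.0, 0.1)}

samples = {}
for name, (mean, rel_sd) in activities.items():
    # Lognormal keeps impacts positive; parameterised from mean and relative sd
    sigma = np.sqrt(np.log(1 + rel_sd**2))
    mu = np.log(mean) - sigma**2 / 2
    samples[name] = rng.lognormal(mu, sigma, N)

total = sum(samples.values())
# Triage: rank activities by squared correlation with the total impact,
# i.e. by their leverage on overall uncertainty (a simple sensitivity index)
priority = sorted(activities,
                  key=lambda a: -np.corrcoef(samples[a], total)[0, 1] ** 2)
```

Targeted data collection would then proceed down `priority` until the product class's impact distribution separates from its neighbours at the specified error rate.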

    Models for Flexible Supply Chain Network Design

    Full text link
    Arguably, Supply Chain Management (SCM) is one of the central problems in Operations Research and Management Science (OR/MS). Supply Chain Network Design (SCND) is one of the most crucial strategic problems in the context of SCM. SCND involves decisions on the number, location, and capacity of the production/distribution facilities of a manufacturing company and/or its suppliers operating in an uncertain environment. Specifically, in the automotive industry, manufacturing companies constantly need to examine and improve their supply chain strategies due to uncertainty in the parameters that impact the design of supply chains. The rise of the Asian markets, the introduction of new technologies (hybrid and electric cars), fluctuations in exchange rates, and volatile fuel costs are a few examples of these uncertainties. Therefore, our goal in this dissertation is to investigate the need for accurate quantitative decision support methods for decision makers and to show different applications of OR/MS models in the SCND realm. In the first technical chapter of the dissertation, we propose a framework that enables decision makers to systematically incorporate uncertainty in their designs, plan for many plausible future scenarios, and assess the quality of service and robustness of their decisions. Further, we discuss the details of the implementation of our framework for a case study in the automotive industry. Our analysis of the uncertainty quantification and the network's design performance illustrates the benefits of using the framework in different settings of uncertainty. Although this chapter is focused on our case study in the automotive industry, it can be generalized to the SCND problem in any industry. In the second technical chapter, we outline the shortcomings of the current literature in incorporating the correlation among the design parameters of supply chains.
In this chapter, we relax the traditional assumption of knowing the distribution of the uncertain parameters. We develop a methodology based on Distributionally Robust Optimization (DRO) with marginal uncertainty sets to incorporate the correlation among uncertain parameters into the design process. Further, we propose a delayed constraint generation algorithm to solve the NP-hard correlated model in significantly less time than that required by commercial solvers. We show that the price of ignoring this correlation increases when we have less information about the uncertain parameters, and that the correlated model yields higher profit than the stochastic model (with the independence assumption) when exchange rates are high. We then extend the models of the previous chapters by presenting capacity options as a mechanism to hedge against uncertainty in the input parameters. Capacity options, similar to financial options, constitute the right, but not the obligation, to buy more commodities from suppliers at a predetermined price, if necessary. In capital-intensive industries like the automotive industry, the lost capital investment for excess capacity and the opportunity costs of underutilized capacity have been important drivers for improving flexibility in supply contracts. Our proposed mechanism for high-tooling-cost parts decreases the total costs of SCND and creates flexibility within the structure of the designed SCNs. Moreover, we draw several insights from our numerical analyses and discuss the possibility of price negotiations between suppliers and manufacturers over the hedging fixed and variable costs. Overall, the findings of this dissertation contribute to improving the flexibility, reliability, and robustness of SCNs for a wide-ranging set of industries. PhD, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145819/1/nsalehi_1.pd
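The economics of a capacity option, as described above, can be sketched with a small scenario simulation. Every number below (demand distribution, prices, fees, capacities) is hypothetical and chosen only to show the mechanism: pay a small fee up front for the right, exercise only when demand overflows the reserved capacity.

```python
import numpy as np

rng = np.random.default_rng(7)
scenarios = rng.normal(1000, 300, 5000).clip(min=0)  # uncertain demand (hypothetical)
unit_price = 10.0     # revenue per unit sold
base_capacity = 900   # capacity reserved/built upfront
base_cost = 5.0       # per-unit cost of reserved capacity (paid regardless of demand)
option_fee = 0.5      # per-unit fee for the *right* to extra capacity
option_exec = 7.0     # per-unit price paid only if the option is exercised
option_size = 400     # extra capacity covered by the option

def profit(d, with_option):
    sold = min(d, base_capacity)
    p = unit_price * sold - base_cost * base_capacity
    if with_option:
        p -= option_fee * option_size                  # fee is sunk, paid up front
        extra = min(max(d - base_capacity, 0), option_size)
        p += (unit_price - option_exec) * extra        # exercise only on overflow demand
    return p

no_opt = np.mean([profit(d, False) for d in scenarios])
with_opt = np.mean([profit(d, True) for d in scenarios])
```

With these (illustrative) parameters the option's expected exercise gain exceeds its fee, so the contract with the option dominates in expectation; shifting the demand distribution down would reverse that, which is exactly the negotiation space between supplier and manufacturer mentioned above.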

    The technical efficiency of SACU Ports: a data envelopment analysis approach

    Get PDF
    Ever-growing international trade and increasing port congestion have led to greater attention on technical efficiency. Seaports are a central and necessary component in facilitating international trade. Yet only limited comprehensive information is available on the technical efficiency of African ports. This study investigated the technical efficiency of the SACU ports during the period 2014-2019 using a data envelopment analysis (DEA) model. The DEA model is effective for measuring port efficiency since the calculations are nonparametric and do not require definition or knowledge of a priori weights for the inputs or outputs, as is necessary when estimating efficiency with production functions. To identify the roots of the technical inefficiency of the SACU ports, the study subdivided technical efficiency into pure technical efficiency and scale efficiency. The model used cargo handled, container throughput, and ship calls as output variables, while quay cranes, number of tugboats, draft, quay length, and number of quays were used as input variables. The study used the scores of the DEA-BCC model as explanatory variables in a Tobit model. The results showed that quay cranes and quay length are the cause of the technical inefficiencies in the ports. Thesis (MCom (Economics)) -- Faculty of Business and Economic Sciences, 202
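The DEA-BCC model mentioned above (the envelopment form, input-oriented, with variable returns to scale) can be sketched with `scipy`. The tiny single-input, single-output dataset below is hypothetical, not the study's port data; it only demonstrates how each port's efficiency score is obtained from one linear program.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_input_efficiency(X, Y):
    """Input-oriented BCC (variable returns to scale) DEA efficiency scores.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). One LP per decision-making unit:
    min theta  s.t.  sum_j lam_j x_ij <= theta * x_io,
                     sum_j lam_j y_rj >= y_ro,  sum_j lam_j = 1,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.zeros(n + 1)                       # variables: [theta, lam_1..lam_n]
        c[0] = 1.0                                # minimise theta
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])   # inputs scaled by theta
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # outputs at least y_ro
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.concatenate([np.zeros(m), -Y[o]])
        A_eq = np.hstack([[0.0], np.ones(n)]).reshape(1, -1)  # convexity (BCC)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
        scores.append(res.x[0])
    return scores

inputs = np.array([[2.0], [4.0], [8.0]])   # e.g. quay cranes (hypothetical)
outputs = np.array([[2.0], [2.0], [2.0]])  # e.g. container throughput (hypothetical)
scores = bcc_input_efficiency(inputs, outputs)
```

The resulting scores (1.0 for the efficient frontier port, below 1.0 otherwise) are what the study then regresses in the second-stage Tobit model.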

    Optimization of facility layout

    Get PDF
    The computer-aided layout technique, which appears to be the best approach to solving complex layout problems, is not commonly used in practice. One important reason may be the generation of unrealistic layouts, which results from ignoring the important practical constraints and objectives involved in layout problems. As one possible solution to this problem, a human planner can develop a layout using a computer routine with those constraints and objectives in mind. However, the development of a heuristic procedure which incorporates human-like layout processes into a computer program could be a better solution. This dissertation provides the means for realistic, or close-to-realistic, layout development using the important practical objectives and constraints involved in facility layout. Instead of ignoring those factors because of the difficulty of expressing them as mathematical statements, using them in the process of layout development helps reach an optimum or near-optimum solution. An experimental system, FLUKES, has been constructed for testing purposes. This system develops layouts which account for the practical factors involved in layout problems. These factors include architectural limitations, health/safety, user preferences, utilities, department shapes, future expansion plans, and energy savings, as well as material handling costs. FLUKES uses these factors not only for the evaluation of a layout, but also in the search for a solution.
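A core ingredient of such heuristic layout procedures is improvement by pairwise exchange over a material-handling-cost objective (the classic CRAFT-style local search). The flow matrix and grid below are hypothetical, and this sketch handles only the cost term, none of the practical factors (safety, utilities, expansion) that FLUKES incorporates.

```python
import itertools

# Hypothetical flows (material trips between 4 departments) and grid slots
flows = [[0, 8, 1, 0],
         [8, 0, 2, 5],
         [1, 2, 0, 9],
         [0, 5, 9, 0]]
slots = [(0, 0), (0, 1), (1, 0), (1, 1)]   # candidate locations on a 2x2 grid

def cost(assign):
    """Total flow-weighted rectilinear distance for a department->slot assignment."""
    c = 0.0
    for i, j in itertools.combinations(range(len(assign)), 2):
        (xi, yi), (xj, yj) = slots[assign[i]], slots[assign[j]]
        c += flows[i][j] * (abs(xi - xj) + abs(yi - yj))
    return c

def pairwise_exchange(assign):
    """CRAFT-style improvement: keep any department swap that lowers cost."""
    assign = list(assign)
    best = cost(assign)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(assign)), 2):
            assign[i], assign[j] = assign[j], assign[i]   # try the swap
            c = cost(assign)
            if c < best - 1e-12:
                best, improved = c, True                  # keep the improvement
            else:
                assign[i], assign[j] = assign[j], assign[i]  # undo
    return assign, best
```

Being a local search, this stops at a swap-optimal layout, which need not be globally optimal; that gap is one reason human-in-the-loop and knowledge-based extensions are attractive.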

    On the Application of Identity-Based Cryptography in Grid Security

    Get PDF
    This thesis examines the application of identity-based cryptography (IBC) in designing security infrastructures for grid applications. In this thesis, we propose a fully identity-based key infrastructure for grid (IKIG). Our proposal exploits some interesting properties of hierarchical identity-based cryptography (HIBC) to replicate security services provided by the grid security infrastructure (GSI) in the Globus Toolkit. The GSI is based on public key infrastructure (PKI) that supports standard X.509 certificates and proxy certificates. Since our proposal is certificate-free and has small key sizes, it offers a more lightweight approach to key management than the GSI. We also develop a one-pass delegation protocol that makes use of HIBC properties. This combination of lightweight key management and efficient delegation protocol has better scalability than the existing PKI-based approach to grid security. Despite the advantages that IKIG offers, key escrow remains an issue which may not be desirable for certain grid applications. Therefore, we present an alternative identity-based approach called dynamic key infrastructure for grid (DKIG). Our DKIG proposal combines both identity-based techniques and the conventional PKI approach. In this hybrid setting, each user publishes a fixed parameter set through a standard X.509 certificate. Although X.509 certificates are involved in DKIG, it is still more lightweight than the GSI as it enables the derivation of both long-term and proxy credentials on-the-fly based only on a fixed certificate. We also revisit the notion of secret public keys which was originally used as a cryptographic technique for designing secure password-based authenticated key establishment protocols. We introduce new password-based protocols using identity-based secret public keys. Our identity-based techniques can be integrated naturally with the standard TLS handshake protocol. 
We then discuss how this TLS-like identity-based secret public key protocol can be applied to securing interactions between users and credential storage systems, such as MyProxy, within grid environments.
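The hierarchical structure that IKIG exploits (authority issues keys to organisations, organisations to users, users to short-lived proxies) can be illustrated structurally with a toy HMAC chain. This is emphatically not HIBC: real HIBC uses pairing-based cryptography, whereas here the parent key trivially reveals every descendant key, so the sketch shows only the delegation shape, not the security properties. All identities and labels below are hypothetical.

```python
import hashlib
import hmac

# Toy *structural* analogy only -- not real HIBC and not secure for this purpose.
def issue_child_key(parent_key: bytes, child_identity: str) -> bytes:
    """Parent entity derives a key bound to a child identity one level down."""
    return hmac.new(parent_key, child_identity.encode(), hashlib.sha256).digest()

root = hashlib.sha256(b"trusted-authority-master-secret").digest()  # hypothetical PKG
org_key = issue_child_key(root, "grid.example.org")        # virtual organisation level
user_key = issue_child_key(org_key, "alice")               # user level
proxy_key = issue_child_key(user_key, "proxy-2024-01-01")  # short-lived delegation
```

The certificate-free appeal is visible even in the toy: a verifier who trusts the root can bind `alice`'s key to her identity string directly, with no X.509 chain to transport, which is the property IKIG's one-pass delegation builds on.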