
    Categorization And Visualization Of Parallel Programming Systems

    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2005. Parallel computing, also called high-performance computing, refers to solving a problem faster by using multiple processors simultaneously. Nowadays many computationally intensive problems are implemented in parallel; examples include the simulation of river flows, physics and chemistry problems, and astronomical simulations. This thesis discusses high-performance software for scientific and engineering applications. The term parallel programming systems here means libraries, languages, compilers, compiler directives and other means through which a programmer can express a parallel algorithm. To design a high-performance program, the programmer must do two key things: understand the problem and find a suitable parallel solution, and decide on the right system for the implementation, which requires good knowledge of existing parallel programming systems. The programmer has to choose between many systems, some of which are closely related, whereas others differ substantially; sometimes several software and hardware components must be combined. This thesis describes and classifies existing parallel programming systems, thus bringing existing surveys up to date, with particular attention to algorithmic skeletons and functional parallel programming. It describes a wiki-based web portal for collecting information about the most recent systems, which has been developed as part of the thesis. A special syntax and a visualization tool, built on the webdot dynamic graph-drawing tool, have also been developed; together they let users define their own categorization scheme and render the systems as a graph, and the syntax is easy to learn and use. Finally, the thesis compares the two major programming styles, message passing and shared memory, on two different algorithms in order to show the performance differences between these styles. The algorithms are implemented in OpenMP and MPI, and the performance of both programs is measured on the SMP cluster of Aachen University, Germany, and on the Beowulf cluster of Ulakbim, Ankara.
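    The two styles the thesis benchmarks can be contrasted with a minimal sketch. The function below approximates pi by midpoint-rule integration, with the loop parallelized in the shared-memory style via an OpenMP directive; it is an illustrative example, not code from the thesis.

```c
/* Shared-memory style (illustrative, not from the thesis): a midpoint-rule
 * approximation of pi whose loop is parallelized with an OpenMP work-sharing
 * directive. Compiled without -fopenmp the pragma is ignored and the loop
 * runs serially, producing the same result. */
double integrate_pi(int n) {
    double h = 1.0 / n;
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++) {
        double x = (i + 0.5) * h;       /* midpoint of subinterval i */
        sum += 4.0 / (1.0 + x * x);     /* integrand: 4 / (1 + x^2)  */
    }
    return h * sum;
}
```

    The equivalent message-passing program would give each MPI rank a slice of the iteration space and combine the partial sums with MPI_Reduce; the trade-off between these two formulations is exactly what the thesis measures on the Aachen and Ulakbim clusters.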

    A review of discrete-time optimization models for tactical production planning

    This is an Accepted Manuscript of an article published in International Journal of Production Research on 27 Mar 2014, available online: http://doi.org/10.1080/00207543.2014.899721. [EN] This study presents a review of optimization models for tactical production planning. The objective of this research is to identify streams and future research directions in this field based on the different classification criteria proposed. The major findings indicate that: (1) the most popular production-planning area is master production scheduling with a big-bucket time-type period; (2) most of the considered limited resources correspond to productive resources and, to a lesser extent, to inventory capacities; (3) the consideration of backlogs, set-up times, parallel machines, overtime capacities and network-type multisite configurations stands out in terms of extensions; (4) the most widely used modelling approach is linear/integer/mixed integer linear programming solved with exact algorithms, such as branch-and-bound, in commercial MIP solvers; (5) CPLEX, C and its variants, and Lindo/Lingo are the most popular development tools among solvers, programming languages and modelling languages, respectively; (6) most works perform numerical experiments with randomly created instances, while a small number of works were validated with real-world data from industrial firms, among which the most popular sectors are sawmills, wood and furniture, automobiles, and semiconductors and electronic devices. This study has been funded by the Universitat Politècnica de València projects: ‘Material Requirement Planning Fourth Generation (MRPIV)’ (Ref. PAID-05-12) and ‘Quantitative Models for the Design of Socially Responsible Supply Chains under Uncertainty Conditions. Application of Solution Strategies based on Hybrid Metaheuristics’ (PAID-06-12). Díaz-Madroñero Boluda, FM.; Mula, J.; Peidro Payá, D. (2014). A review of discrete-time optimization models for tactical production planning. International Journal of Production Research, 52(17), 5171-5205. doi:10.1080/00207543.2014.899721
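    A minimal instance of the model class surveyed above is single-item uncapacitated lot sizing, which, unlike the capacitated MILP formulations the review focuses on, can be solved exactly by the classic Wagner-Whitin dynamic program. The sketch below is illustrative only; the demands and cost parameters are assumed, not taken from the review.

```c
#include <float.h>

/* Wagner-Whitin dynamic program for single-item uncapacitated lot sizing:
 * choose in which periods to produce so that total setup plus holding cost
 * is minimal. F[t] is the minimum cost of covering demand for periods 1..t.
 * This illustrates the simplest model class behind the surveyed MILPs;
 * capacitated multi-item variants require branch-and-bound solvers. */
double wagner_whitin(const int *d, int T, double setup, double hold) {
    double F[T + 1];
    F[0] = 0.0;
    for (int t = 1; t <= T; t++) {
        F[t] = DBL_MAX;
        /* candidate: last production run in period j covers periods j..t */
        for (int j = 1; j <= t; j++) {
            double c = F[j - 1] + setup;
            for (int i = j; i <= t; i++)
                c += hold * (i - j) * d[i - 1];  /* holding cost for d[i] */
            if (c < F[t]) F[t] = c;
        }
    }
    return F[T];
}
```

    With demands {50, 60, 90, 70}, a setup cost of 100 and a unit holding cost of 1, the optimal plan produces in periods 1 and 3.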

    A Hybrid Fuzzy Approach to Bullwhip Effect in Supply Chain Networks


    Prenatal Drug Exposure and its Effects on Fetal Development: Clinical and Health Education Implications

    Prenatal drug exposure is a common clinical phenomenon in pregnancy cases. Pregnancy is a fragile period of time for both the mother and the fetus. Therefore, strict drug regulation is important to ensure the safety and wellbeing of the developing fetus. Certain drugs, once thought to be safe, have been found to have detrimental effects on the normal development of functioning organ systems in the fetus. Current research has identified drugs that, when taken during pregnancy, can result in the onset of fetal physical abnormalities, impaired brain development, and disrupted organogenesis and organ function. Thalidomide, losartan, opioids, alcohol and caffeine are reviewed to identify the trends in the literature on prenatal drug exposure and the implications of drug usage during pregnancy. Although the Food & Drug Administration (FDA) is active in regulation, educational programs for pregnant/lactating women and epidemiological research on the effects of both prescribed and over-the-counter drugs on the fetus are necessary to preserve maternal health and, consequently, the health of the baby.

    Women living with HIV, diabetes and/or hypertension multi-morbidity in Uganda: a qualitative exploration of experiences accessing an integrated care service

    Purpose: Women experience a triple burden of ill-health spanning non-communicable diseases (NCDs), reproductive and maternal health conditions and human immunodeficiency virus (HIV) in sub-Saharan Africa. Whilst there is research on the integrated service experiences of women living with HIV (WLHIV) and cancer, little is known regarding those of WLHIV, diabetes and/or hypertension when accessing integrated care. Our research responds to this gap. Design/methodology/approach: The INTE-AFRICA project conducted a pragmatic parallel-arm cluster randomised trial to scale up and evaluate “one-stop” integrated care clinics for HIV infection, diabetes and hypertension at selected primary care centres in Uganda. A qualitative process evaluation explored and documented patient experiences of integrated care for HIV, diabetes and/or hypertension. In-depth interviews were conducted, using a phenomenological approach, with six WLHIV with diabetes and/or hypertension accessing a “one-stop” clinic. Thematic analysis of the narratives revealed five themes: lay health knowledge and alternative medicine, community stigma, experiences of integrated care, navigating personal challenges, and health service constraints. Findings: WLHIV described patient pathways navigating HIV and diabetes/hypertension, with caregiving responsibilities, poverty, travel time and cost, and personal ill health impacting on their ability to adhere to multi-morbid integrated treatment. Health service barriers to optimal integrated care included an unreliable drug supply for diabetes/hypertension and HIV-linked stigma. Comprehensive integrated care is recommended to further consider gender-sensitive aspects of care. Originality/value: This study, whilst small scale, provides a unique insight into the lived experience of WLHIV navigating care for HIV and diabetes and/or hypertension, and how a “one-stop” integrated care clinic can support them (and their children) in their treatment journeys.

    Software Tools and Analysis Methods for the Use of Electromagnetic Articulography Data in Speech Research

    Recent work with Electromagnetic Articulography (EMA) has shown it to be an excellent tool for characterizing speech kinematics. By tracking the position and orientation of sensors placed on the jaws, lips, teeth and tongue as they move in an electromagnetic field, information about the movement and coordination of the articulators can be obtained with great time resolution. This technique has far-reaching applications for advancing fields related to speech articulation, including recognition, synthesis, motor learning, and clinical assessments. As more EMA data becomes widely available, a growing need exists for software that performs basic processing and analysis functions. The objective of this work is to create and demonstrate the use of new software tools that make full use of the information provided in EMA datasets, with the goal of maximizing the impact of EMA research. A new method for biteplate-correcting orientation data is presented, allowing orientation data to be used for articulatory analysis. Two example applications using orientation data are presented: a tool for jaw-angle measurement using a single EMA sensor, and a tongue interpolation tool based on three EMA sensors attached to the tongue. The results demonstrate that combined position and orientation data give a more complete picture of articulation than position data alone, and that orientation data should be incorporated in future work with EMA. A new standalone, GUI-based software tool is also presented for the visualization of EMA data. It includes simultaneous real-time playback of kinematic and acoustic data, as well as basic analysis capabilities for both types of data. A comparison of the visualization tool to existing EMA software shows that it provides superior visualization and comparable analysis features. The tool will be included with the Marquette University EMA-MAE database to aid researchers working with this dataset.

    Energy aware performance evaluation of WSNs

    Distributed sensor networks have been discussed for more than 30 years, but the vision of Wireless Sensor Networks (WSNs) has been brought into reality only by the rapid advancements in sensor design, information technologies and wireless networking that have paved the way for the proliferation of WSNs. The unique characteristics of sensor networks introduce new challenges, amongst which prolonging the sensor lifetime is the most important. Energy-efficient solutions are required in each aspect of WSN design to deliver the potential advantages of WSNs; energy efficiency is therefore a grand challenge in both existing and future WSN solutions. The main contribution of this thesis is an approach that considers the collaborative nature of WSNs and their correlation characteristics, providing a tool which treats issues from the physical layer to the application layer together, as entities, within a framework that facilitates the performance evaluation of WSNs. Unlike existing evaluation tools, the simulation approach considered provides a clear separation of concerns among the software architecture of the applications, the hardware configuration and the WSN deployment. The reuse of models across projects and organizations is also promoted, while realistic WSN lifetime estimations and performance evaluations are possible in attempts to improve performance and maximize the lifetime of the network. In this study, simulations are carried out with careful assumptions for the various layers, taking into account the real-time characteristics of WSNs. The sensitivity of WSN systems is mainly due to their fragile nature where energy consumption is concerned. The case studies presented demonstrate the importance of the various parameters considered in this study. Simulation-based studies are presented, taking into account realistic settings from each layer of the protocol stack, and the physical environment is considered as well.
The performance of the layered protocol stack in realistic settings reveals several important interactions between different layers. These interactions are especially important for the design of WSNs in terms of maximizing the lifetime of the network.
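    Lifetime estimation of the kind described above can be illustrated with a first-order radio energy model commonly used in WSN studies. The model and all constants below are assumptions for illustration, not parameters from the thesis.

```c
/* First-order radio energy model (illustrative constants, not from the
 * thesis): sending k bits over distance d costs E_elec*k + e_amp*k*d^2 for
 * the electronics plus the amplifier; receiving k bits costs E_elec*k.
 * Lifetime is estimated as the number of rounds until the battery runs out. */
#define E_ELEC 50e-9    /* J per bit for TX/RX electronics (assumed)       */
#define E_AMP  100e-12  /* J per bit per m^2 for the TX amplifier (assumed) */

double tx_energy(int bits, double dist) {
    return E_ELEC * bits + E_AMP * bits * dist * dist;
}

/* A node that both transmits and receives bits_per_round bits each round. */
int rounds_until_depleted(double battery_j, int bits_per_round, double dist) {
    double per_round = tx_energy(bits_per_round, dist) + E_ELEC * bits_per_round;
    return (int)(battery_j / per_round);
}
```

    Even this toy model shows the quadratic penalty of transmission distance, one of the cross-layer interactions (routing versus radio energy) that the simulation framework is built to expose.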

    Report of the Hybrid Security Governance in Africa project terminal conference : 21-22 July 2017

    The project aims to research the ‘complex amalgam of formal and informal, statutory and non-statutory actors and institutions’ which together constitute what are called ‘hybrid security orders’. Socio-cultural factors that hinder justice in the formal sector are also present in the informal sector. Customary or traditional systems often serve as custodians of customs and traditions that ultimately privilege men and maintain the status quo. This paper provides a report on research presentations covering a broad range of issues in security sector reform at the International Conference on Hybrid Security Governance in Africa.

    Generating and auto-tuning parallel stencil codes

    In this thesis, we present a software framework, Patus, which generates high-performance stencil codes for different types of hardware platforms, including current multicore CPU and graphics processing unit architectures. The ultimate goals of the framework are productivity, portability (of both the code and its performance), and achieving high performance on the target platform. A stencil computation updates every grid point in a structured grid based on the values of its neighboring points. This class of computations occurs frequently in scientific and general-purpose computing (e.g., in partial differential equation solvers or in image processing), justifying the focus on this kind of computation. The proposed key ingredients for achieving the goals of productivity, portability, and performance are domain-specific languages (DSLs) and the auto-tuning methodology. The Patus stencil specification DSL allows the programmer to express a stencil computation concisely, independently of hardware architecture-specific details. It thus increases programmer productivity by freeing him or her from low-level programming model issues and from manually applying hardware platform-specific code optimization techniques. The use of domain-specific languages also implies code reusability: once implemented, the same stencil specification can be reused on different hardware platforms, i.e., the specification code is portable across hardware architectures. Constructing the language to be geared towards a special purpose makes it amenable to more aggressive optimizations and therefore to potentially higher performance. Auto-tuning provides performance and performance portability by automatically adapting implementation-specific parameters to the characteristics of the hardware on which the code will run.
By automating the process of parameter tuning (which essentially amounts to solving an integer programming problem whose objective function is the code's performance as a function of the parameter configuration), the system can also be used more productively than if the programmer had to fine-tune the code manually. We show performance results for a variety of stencils for which Patus was used to generate the corresponding implementations. The selection includes stencils taken from two real-world applications: a simulation of the temperature within the human body during hyperthermia cancer treatment, and a seismic application. These examples demonstrate the framework's flexibility and its ability to produce high-performance code.
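    A stencil computation of the kind Patus targets can be written by hand in a few lines. The sketch below, a 1D three-point Jacobi averaging sweep, is not Patus output; it shows the naive baseline that the generated, auto-tuned code is meant to outperform.

```c
/* 1D three-point Jacobi stencil: each interior point becomes the average of
 * itself and its two neighbours, read from `in` and written to `out`
 * (double buffering avoids overwriting values still needed). Boundary
 * values are kept fixed. Illustrative baseline, not Patus-generated code. */
void jacobi_sweep(const double *in, double *out, int n) {
    out[0] = in[0];              /* Dirichlet-style fixed boundaries */
    out[n - 1] = in[n - 1];
    for (int i = 1; i < n - 1; i++)
        out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0;
}
```

    From a specification of just the update expression, Patus can apply blocking, vectorization and other platform-specific optimizations, then auto-tune their parameters, whereas this naive loop leaves all of that to the compiler.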