27 research outputs found

    Strategies to Maintain Profitability When Crude Oil Prices Fluctuate

    Rapid and sustained fluctuations in the crude oil market remain a threat to the financial performance of national and multinational oil and gas corporations and can reduce their profitability. The purpose of this qualitative descriptive single case study was to explore strategies that oil production leaders used to maintain profitability when crude oil prices fluctuate. The participants were 6 senior oil production leaders in a national oil corporation in Ghana who had employed successful strategies to maintain profitability. Kraus and Litzenberger’s trade-off theory of capital structure served as the conceptual framework for the study. Data collection methods included semistructured interviews, company documents, direct observation, and a reflective journal. Based on methodological triangulation and thematic data analysis, 3 broad themes emerged: enhancing operational efficiency through organizational restructuring and competitive oil price hedging, diversifying the business portfolio through effective asset management and innovative technologies, and optimizing the capital structure through debt restructuring. Oil production leaders would have to embrace the growth of artificial intelligence and the Internet of Things to improve the efficiency of business operations and maintain profitability. Oil production leaders might apply these findings to enhance business continuity, avoid bankruptcy, and maintain profitability during oil price downturns. Maintaining profitability would help ensure employees’ job security and flow of income. Sustained income would benefit employees and their families and could have a positive social impact on employees’ local communities.

    Special Libraries, Spring 1993

    Volume 84, Issue 2
    https://scholarworks.sjsu.edu/sla_sl_1993/1001/thumbnail.jp

    The Importance of Communication Skills to Independent Crop Consultants

    Independent crop consulting companies provide services to farmers by scouting (i.e., collecting field observations of plants and pests) and developing management recommendations for individual fields. In production agriculture, independent crop consultants (ICCs) are professionals who are independent of product sales. They are knowledgeable in many disciplines, including plant pathology, entomology, weed science, plant science, economics, water management, and soil science. However, ICCs must also have strong communication skills to reach their audience of field scouts, farmers, industry professionals, and government officials. The goal of this document is to examine how ICCs use their communication skills and how they can refine and strengthen them. Communication is an important life skill, involving the transfer of knowledge or information to produce an outcome. Communication concepts and models can be applied to interpersonal communication between ICCs and their audience (Chapter 1). Communication between the field scout and the ICC primarily occurs during the scout's field training, and the educational methods of experiential learning and scaffolding can be applied to this training process (Chapter 2). Interviews with farmers explored the motivations and values that aid the ICC in communicating management recommendations to farmers (Chapter 3). These interviews emphasized that farmers have individual goals, motivations, values, and communication styles to which an ICC must adapt in order to develop a trusting relationship. Independent crop consultants are also instrumental in the agricultural social system, bridging knowledge transfer between farmers, industry professionals, and government officials (Chapter 4). Advisor: Gary L. Hei

    Novel parallel approaches to efficiently solve spatial problems on heterogeneous CPU-GPU systems

    In recent years, approaches that seek to extract valuable information from large datasets have become particularly relevant in today's society. In this category, we can highlight problems that involve data analysis distributed across two-dimensional scenarios, called spatial problems. These usually involve processing (i) a series of features distributed across a given plane or (ii) a matrix of values where each cell corresponds to a point on the plane. This shows the open-ended and complex nature of spatial problems, but it also leaves room for imagination in the search for new solutions. One of the main complications we encounter when dealing with spatial problems is that they are very computationally intensive, typically taking a long time to produce the desired result. This drawback is also an opportunity to use heterogeneous systems to address spatial problems more efficiently. Heterogeneous systems give the developer greater freedom to speed up suitable algorithms by increasing the parallel programming options available, making it possible for different parts of a program to run on the dedicated hardware that suits them best. Several spatial problems that have not yet been optimised for heterogeneous systems cover very diverse areas that seem vastly different at first sight; however, they are closely related by common data processing requirements, which makes them well suited to dedicated hardware. In particular, this thesis provides new parallel approaches to tackle three crucial spatial problems: latent fingerprint identification, total viewshed computation, and path planning based on maximising visibility in large regions.

    Latent fingerprint identification is one of the essential identification procedures in criminal investigations. Addressing this task is difficult as (i) it requires analysing large databases in a short time, and (ii) it is commonly addressed by combining different methods with complex data dependencies, making it challenging to exploit parallelism on heterogeneous CPU-GPU systems. Moreover, most efforts in this context focus on improving the accuracy of the approaches and neglect the processing time: the most accurate algorithm was designed to process the fingerprints using a single thread. We developed a new methodology, “Asynchronous processing for Latent Fingerprint Identification” (ALFI), that speeds up processing while maintaining high accuracy. ALFI exploits all the resources of CPU-GPU systems, using asynchronous processing and fine- and coarse-grained parallelism to analyse massive fingerprint databases. We assessed the performance of ALFI on Linux and Windows operating systems using the well-known NIST/FVC databases. Experimental results revealed that ALFI is on average 22x faster than the state-of-the-art identification algorithm, reaching a speed-up of 44.7x in the best-studied case.
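    The abstract contains no code, so the following is only a rough, hypothetical sketch of the kind of asynchronous host/device overlap ALFI relies on: while one batch of fingerprints is being matched (standing in for GPU work), the CPU already prepares the next batch. All names and the batch structure are assumptions for illustration, not ALFI's actual implementation.

# Hypothetical sketch of asynchronous CPU-GPU overlap via double buffering:
# while the "device" matches batch i, the host already prepares batch i+1.
# cpu_prepare/gpu_match are stand-ins for illustration, not ALFI's real code.
import queue
import threading
import time

def cpu_prepare(batch_id):
    time.sleep(0.01)                     # simulate CPU-side feature extraction
    return [f"minutiae_{batch_id}_{i}" for i in range(4)]

def gpu_match(features):
    time.sleep(0.02)                     # simulate a matching kernel on the GPU
    return len(features)                 # placeholder similarity score

def pipeline(num_batches=8):
    ready = queue.Queue(maxsize=2)       # small buffer keeps memory use bounded

    def producer():
        for b in range(num_batches):
            ready.put(cpu_prepare(b))    # the CPU keeps working ahead of the GPU
        ready.put(None)                  # sentinel: no more batches

    threading.Thread(target=producer, daemon=True).start()
    scores = []
    while (feats := ready.get()) is not None:
        scores.append(gpu_match(feats))  # "GPU" consumes while the CPU prepares the next batch
    return scores

if __name__ == "__main__":
    print(pipeline())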
    In terrain analysis, Digital Elevation Models (DEMs) are relevant datasets used as input to algorithms that typically sweep the terrain to analyse its main topographic features, such as visibility, elevation, and slope. The most challenging computation related to this topic is the total viewshed problem: computing the viewshed (the visible area of the terrain) for each of the points in the DEM. The algorithms intended to solve this problem require many memory accesses to 2D arrays which, despite being regular, lead to poor data locality in memory. We proposed a methodology called “skewed Digital Elevation Model” (sDEM) that substantially improves the locality of memory accesses and exploits the inherent parallelism of rotational sweep-based algorithms. In particular, sDEM applies a data relocation technique before accessing the memory and computing the viewshed, thus significantly reducing the execution time. Different implementations are provided for single-core, multi-core, single-GPU, and multi-GPU platforms. We carried out two experiments to compare sDEM with (i) the most widely used geographic information system (GIS) software and (ii) the state-of-the-art algorithm for solving the total viewshed problem. In the first experiment, sDEM is on average 8.8x faster than current GIS software, despite considering only a few points because of the limitations of the GIS software. In the second experiment, sDEM is 827.3x faster than the state-of-the-art algorithm in the best case.

    The use of Unmanned Aerial Vehicles (UAVs) with multiple onboard sensors has grown enormously in tasks involving terrain coverage, such as environmental and civil monitoring, disaster management, and forest fire fighting. Many of these tasks require a quick and early response, which makes maximising the land covered from the flight path an essential goal, especially when the area to be monitored is irregular, large, and includes many blind spots. In this regard, state-of-the-art total viewshed algorithms can help analyse large areas and find new paths providing all-round visibility. We designed a new heuristic called “Visibility-based Path Planning” (VPP) to solve the path planning problem in large areas based on a thorough visibility analysis. VPP generates flyable paths that provide high visual coverage to monitor forest regions using the onboard camera of a single UAV. For this purpose, the hidden areas of the target territory are identified and considered when generating the path. Simulation results showed that VPP covers up to 98.7% of the Montes de Malaga Natural Park and 94.5% of the Sierra de las Nieves National Park, both located in the province of Malaga (Spain). In addition, a real flight test confirmed the high visibility achieved using VPP. Our methodology and analysis can be easily applied to enhance monitoring in other large outdoor areas.
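    Both the total viewshed computation and VPP's coverage analysis rest on the same primitive: a point-to-point line-of-sight test over a DEM. The sketch below is a rough, hypothetical illustration of that primitive for a single observer on a small synthetic grid; it is a naive formulation and is not the thesis's sDEM data layout or its GPU implementation.

# Naive single-observer viewshed over a tiny synthetic DEM (illustration only;
# not the thesis's sDEM layout or GPU code). A target cell is visible if no
# intermediate terrain sample rises above the straight sight line joining the
# observer's eye and the target cell.
import numpy as np

def viewshed(dem, obs_row, obs_col, obs_height=2.0, samples=64):
    rows, cols = dem.shape
    eye = dem[obs_row, obs_col] + obs_height
    visible = np.zeros((rows, cols), dtype=bool)
    t = np.linspace(0.0, 1.0, samples)[1:-1]              # samples strictly between endpoints
    for r in range(rows):
        for c in range(cols):
            ri = np.rint(obs_row + t * (r - obs_row)).astype(int)
            ci = np.rint(obs_col + t * (c - obs_col)).astype(int)
            keep = ~(((ri == r) & (ci == c)) |            # drop samples landing on the target cell
                     ((ri == obs_row) & (ci == obs_col)))  # ...or on the observer's own cell
            sight = eye + t * (dem[r, c] - eye)           # elevation of the sight line at each sample
            visible[r, c] = bool(np.all(dem[ri[keep], ci[keep]] <= sight[keep]))
    return visible

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dem = rng.random((40, 40)) * 50.0                     # synthetic elevations in metres
    vs = viewshed(dem, 20, 20)
    # The total viewshed problem repeats this for every cell as observer,
    # which is why the thesis accelerates it on multi-core CPUs and GPUs.
    print(f"visible cells from (20, 20): {vs.sum()} / {vs.size}")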

    Advances in Supercapacitor Technology and Applications II

    Energy storage is a key topic for research, industry, and business, and one that is attracting increasing interest. Every available energy-storage technology (batteries, fuel cells, flywheels, and so on) covers only a limited part of the power-energy plane and comes with inherent drawbacks. Supercapacitors (also known as ultracapacitors, electrochemical capacitors, pseudocapacitors, or double-layer capacitors) feature exceptional capacitance values, creating new scenarios and opportunities in both research and industrial applications, partly because the related market is relatively recent. In practice, supercapacitors offer a trade-off between the high specific energy of batteries and the high specific power of traditional capacitors. Developments in supercapacitor technology and supporting electronics, combined with reductions in cost, may revolutionize everything from large power systems to consumer electronics. The potential benefits of supercapacitors stem from advances in the underlying technology, but they can only be realized when proper tools are available for testing, modeling, diagnosis, sizing, management, and technical-economic analysis. This book collects some of the latest developments in the field of supercapacitors, ranging from new materials to practical applications such as energy storage, uninterruptible power supplies, smart grids, electric vehicles, advanced transportation, and renewable energy sources.
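    As a back-of-the-envelope illustration of the energy-power trade-off mentioned above (standard textbook relations with assumed example values, not figures taken from the book), the energy stored in a supercapacitor of capacitance C charged to voltage V, and the maximum power it can deliver through its equivalent series resistance R_ESR, are

        E = \tfrac{1}{2} C V^2, \qquad P_{\max} = \frac{V^2}{4\, R_{\mathrm{ESR}}}

    For a typical large cell (say C = 3000 F, V = 2.7 V, R_ESR ≈ 0.3 mΩ), this gives E ≈ 11 kJ ≈ 3 Wh but P_max ≈ 6 kW: plenty of power but modest energy, which is exactly the gap between batteries and conventional capacitors that supercapacitors occupy.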

    Seattle Pacific University Catalog 2011-2012

    https://digitalcommons.spu.edu/archives_catalogs/1093/thumbnail.jp

    Seattle Pacific University Catalog 2012-2013

    https://digitalcommons.spu.edu/archives_catalogs/1094/thumbnail.jp

    Towards Performance Portable Graph Algorithms

    In today's data-driven world, our computational resources have become heterogeneous, making it crucial to process large-scale graphs in an architecture-agnostic manner. Traditionally, hand-optimized high-performance computing (HPC) solutions have been studied and used to implement highly efficient and scalable graph algorithms. In recent years, several graph processing and management systems have also been proposed. Hand-optimized HPC approaches require high levels of expertise, while graph processing frameworks suffer from limited expressibility and performance. Portability is a major concern for both approaches. The main thesis of this work is that block-based graph algorithms offer a compromise between efficient parallelism and architecture-agnostic algorithm design for a wide class of graph problems. This dissertation seeks to prove this thesis by focusing on three pillars: data/computation partitioning, block-based algorithm design, and performance portability. We first show how to partition the computation and the data to design efficient block-based algorithms for the graph merging and triangle counting problems. Then, generalizing from these experiences, we propose PGAbB, an algorithmic framework for implementing block-based graph algorithms on shared-memory, heterogeneous machines. PGAbB aims to maximally leverage different architectures by implementing task-based execution on top of a block-based programming model. We also discuss PGAbB's programming model, algorithmic optimizations for scheduling, and load-balancing strategies for graph problems on real-world and synthetic inputs.
    Ph.D.
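    The abstract includes no code, so the following is a rough, hypothetical illustration of what block-based processing means here: a toy blocked triangle count, not PGAbB's actual API or implementation. The adjacency matrix is partitioned into 2D blocks and the count is accumulated block by block, so that each block combination becomes an independent task a scheduler could place on a CPU core or a GPU.

# Toy block-based triangle counting (illustration only; not PGAbB's code).
# For a simple undirected graph, triangles = sum((A @ A) * A) / 6.
# Computing A @ A block by block turns the work into independent per-block
# tasks, which is the property a task scheduler can exploit on heterogeneous hardware.
import numpy as np

def triangle_count_blocked(A, b=2):
    n = A.shape[0]
    total = 0
    for i in range(0, n, b):            # block-row of the result
        for j in range(0, n, b):        # block-column of the result
            acc = np.zeros((min(b, n - i), min(b, n - j)), dtype=np.int64)
            for k in range(0, n, b):    # inner block dimension
                acc += A[i:i+b, k:k+b] @ A[k:k+b, j:j+b]
            # keep only 2-paths i->k->j that are closed by the edge i-j
            total += int((acc * A[i:i+b, j:j+b]).sum())
    return total // 6                   # each triangle is counted 6 times

if __name__ == "__main__":
    # 4-clique on vertices 0..3 plus an isolated edge 4-5: C(4,3) = 4 triangles
    A = np.zeros((6, 6), dtype=np.int64)
    for u in range(4):
        for v in range(4):
            if u != v:
                A[u, v] = 1
    A[4, 5] = A[5, 4] = 1
    print(triangle_count_blocked(A))    # -> 4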