371 research outputs found

    Post-merger financial performance of Oklahoma cooperatives

    Audited financial statements of 22 Oklahoma cooperatives were used to investigate the success of mergers in improving financial performance. Five categories of annual financial ratios were calculated for each firm. Paired difference tests were used to analyze whether the mergers improved financial performance.
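The paired difference test named in the abstract can be sketched as follows. The firm-level ratio values below are hypothetical, purely for illustration of the method (a paired t-test on within-firm pre/post changes), not data from the study:

```python
import math
import statistics

# Hypothetical current-ratio values for the same firms before and after merger
pre  = [1.8, 1.5, 2.1, 1.2, 1.9, 1.6]
post = [1.9, 1.7, 2.0, 1.5, 2.2, 1.8]

# Paired difference test: is the mean within-firm change different from zero?
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
# Compare t_stat against the t distribution with n - 1 degrees of freedom
```

Pairing each firm with itself controls for firm-level differences that would otherwise swamp the merger effect.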

    The influence of organisational typology, strategy, leadership and psychological forces on UK offshore oil and gas industry safety performance.

    The UK Offshore Oil and Gas Industry is recognised as having made significant safety performance improvement progress, following the Piper Alpha disaster (6th July 1988), subsequent Public Inquiry and 106 recommendations made by the Cullen Report. However, accidents continue to occur on offshore assets due to leadership and organisational failures, poor behaviours, lack of operating discipline, asset integrity challenges and an absence of aligned safety strategy. Research was conducted through a strategic lens, looking across a typical operator company's value chain, and going beyond the predominant technical and engineering safety focus. Utilising safety climate as a leading indicator of safety performance, research explored the ways in which organisational typology, strategy, leadership and psychological forces contribute to safety performance on offshore assets. Research of this nature had not previously been conducted in the UK offshore oil and gas industry - triangulation of qualitative and quantitative data was utilised. Semi-structured interviews were conducted onshore with managers and supervisors to determine the organisational typology make-up of the value chain and associated safety strategy, with consideration for leadership and the psychological forces dynamic of human factors. An offshore workforce safety study was deployed at seven offshore assets. Under academic licence, the study utilised proven and validated data collection tools: authentic leadership questionnaire (ALQ); psychological capital questionnaire (PCQ); and the safety climate tool (SCT). The research identified organisational typology patterns across the value chain. Operator and contractor organisations were determined to typically identify as defenders and prospectors, while sub-contractors identified as analyzers and reactors. 
Considering safety performance at the offshore assets as measured by safety climate perception, it was concluded that organisational typology had no influence: there was no statistically significant difference in safety climate perceptions across the typologies associated with the operator, contractor and sub-contractor value chain groups. Strict compliance with the operator's control of work arrangements, plus consistent operator safety messaging, was concluded to be the mediating factor. The authentic leadership and psychological capital constructs were both demonstrated to be positively correlated with safety climate scores. Each of the seven assets studied returned 'Good' safety climate scores on a validated scoring system; however, there was no significant difference across operator, contractor and sub-contractor groups for safety climate scores by authentic leadership and psychological capital. Strict compliance with the operator's control of work arrangements, plus consistent operator safety messaging, was again concluded to be the mediating factor. Persisting with current compliance-based practices was determined to limit the ability to evolve from 'Good' to 'Excellent' safety climate scores in future offshore asset operations. Contributions to practice, knowledge and method were derived from the research findings and conclusions. Four specific recommendations were made for practice, plus four for future safety science research.
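The positive correlation reported between the leadership/psychological-capital constructs and safety climate scores is a standard Pearson correlation. A minimal sketch, using entirely hypothetical per-asset mean scores (the study itself used the validated ALQ, PCQ and SCT instruments):

```python
import math

# Hypothetical per-asset mean scores for seven offshore assets (illustrative only)
leadership = [3.9, 4.1, 3.7, 4.3, 4.0, 3.8, 4.2]   # authentic leadership
climate    = [7.1, 7.4, 6.8, 7.7, 7.2, 6.9, 7.5]   # safety climate

n = len(leadership)
mx = sum(leadership) / n
my = sum(climate) / n
cov = sum((x - mx) * (y - my) for x, y in zip(leadership, climate))
sx = math.sqrt(sum((x - mx) ** 2 for x in leadership))
sy = math.sqrt(sum((y - my) ** 2 for y in climate))
r = cov / (sx * sy)   # Pearson correlation coefficient, in [-1, 1]
```

A value of r near +1 indicates that assets with higher leadership scores also report higher safety climate, which is the shape of relationship the abstract describes.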

    Beyond solvent exclusion: i-Motif detecting capability and an alternative DNA light-switching mechanism in a ruthenium(II) polypyridyl complex

    Cytosine-rich DNA can fold into secondary structures known as i-motifs. Mounting experimental evidence suggests that these non-canonical nucleic acid structures form in vivo and play biological roles. However, to date, there are no optical probes able to identify i-motif in the presence of other types of DNA. Herein, we report for the first time the interactions between the three isomers of [Ru(bqp)2]2+ with i-motif, G-quadruplex, and double-stranded DNA. Each isomer has vastly different light-switching properties: mer is “on”, trans is “off”, and cis switches from “off” to “on” in the presence of all types of DNA. Using emission lifetime measurements, we show the potential of cis to light up and identify i-motif even when other DNA structures are present, using a sequence from the promoter region of the death-associated protein (DAP) gene. Moreover, separated cis enantiomers revealed Λ-cis to have a preference for the i-motif, whereas Δ-cis has a preference for double-helical DNA. Finally, we propose a previously unreported light-switching mechanism that originates from steric compression and electronic effects in a tight binding site, as opposed to solvent exclusion. Our work suggests that many published non-emissive Ru complexes could potentially switch on in the presence of biological targets with suitable binding sites, opening up a plethora of opportunities in the detection of biological molecules.

    A technology assessment of alternative communications systems for the space exploration initiative

    Telecommunications, Navigation, and Information Management (TNIM) services are vital to accomplish the ambitious goals of the Space Exploration Initiative (SEI). A technology assessment is provided for four alternative lunar and Mars operational TNIM systems based on detailed communications link analyses. The four alternative systems range from a minimum to a fully enhanced capability and use frequencies from S-band, through Ka-band, and up to optical wavelengths. Included are technology development schedules as they relate to present SEI mission architecture time frames.
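A core term in any such communications link analysis is free-space path loss, which grows with both distance and carrier frequency. A minimal sketch for a lunar link; the exact carrier frequencies (2.2 GHz for S-band, 32 GHz for Ka-band) are assumed typical values, not figures from the assessment:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

moon_distance = 384_400e3            # mean Earth-Moon distance, m
s_band  = fspl_db(moon_distance, 2.2e9)   # assumed S-band carrier
ka_band = fspl_db(moon_distance, 32e9)    # assumed Ka-band carrier
```

The higher Ka-band loss is compensated by narrower beams and higher antenna gains, which is part of why the trade between bands requires a full link budget rather than path loss alone.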

    PiPar: Pipeline parallelism for collaborative machine learning

    Funding: This work was sponsored by Rakuten Mobile, Inc., Japan. Collaborative machine learning (CML) techniques, such as federated learning, have been proposed to train deep learning models across multiple mobile devices and a server. CML techniques are privacy-preserving since the local model trained on each device, rather than the raw data from the device, is shared with the server. However, CML training is inefficient due to low resource utilization. We identify idling resources on the server and devices due to sequential computation and communication as the principal cause of low resource utilization. A novel framework PiPar that leverages pipeline parallelism for CML techniques is developed to substantially improve resource utilization. A new training pipeline is designed to parallelize the computations on different hardware resources and communication on different bandwidth resources, thereby accelerating the training process in CML. A low overhead automated parameter selection method is proposed to optimize the pipeline, maximizing the utilization of available resources. The experimental results confirm the validity of the underlying approach of PiPar and highlight that when compared to federated learning: (i) the idle time of the server can be reduced by up to 64.1×, and (ii) the overall training time can be accelerated by up to 34.6× under varying network conditions for a collection of six small and large popular deep neural networks and four datasets without sacrificing accuracy. It is also experimentally demonstrated that PiPar achieves performance benefits when incorporating differential privacy methods and operating in environments with heterogeneous devices and changing bandwidths. Peer reviewed.
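The gain from overlapping computation and communication can be seen with a toy timing model. This is not PiPar's scheduler, just an illustration of why pipelining micro-batches across device compute, network transfer, and server compute reduces idle time:

```python
# Toy timing model: k micro-batches, each needing device compute (t_dev),
# communication (t_comm), and server compute (t_srv).

def sequential_time(k, t_dev, t_comm, t_srv):
    # No overlap: every stage waits for the previous one to finish.
    return k * (t_dev + t_comm + t_srv)

def pipelined_time(k, t_dev, t_comm, t_srv):
    # Stages overlap: after the pipeline fills, one micro-batch completes
    # every `bottleneck` time units (the slowest stage dominates).
    bottleneck = max(t_dev, t_comm, t_srv)
    return t_dev + t_comm + t_srv + (k - 1) * bottleneck

seq  = sequential_time(8, 1, 1, 1)   # 24 time units
pipe = pipelined_time(8, 1, 1, 1)    # 10 time units
```

When the three stage costs are balanced, pipelining approaches a 3x speedup for large k; imbalanced stages reduce the gain, which is why a parameter selection step that balances the pipeline matters.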

    DNNShifter: an efficient DNN pruning system for edge computing

    Funding: This research is funded by Rakuten Mobile, Japan. Deep neural networks (DNNs) underpin many machine learning applications. Production quality DNN models achieve high inference accuracy by training millions of DNN parameters, which has a significant resource footprint. This presents a challenge for resources operating at the extreme edge of the network, such as mobile and embedded devices that have limited computational and memory resources. To address this, models are pruned to create lightweight variants more suitable for these devices. Existing pruning methods are unable to provide similar quality models compared to their unpruned counterparts without significant time costs and overheads, or are limited to offline use cases. Our work rapidly derives suitable model variants while maintaining the accuracy of the original model. The model variants can be swapped quickly when system and network conditions change to match workload demand. This paper presents DNNShifter, an end-to-end DNN training, spatial pruning, and model switching system that addresses the challenges mentioned above. At the heart of DNNShifter is a novel methodology that prunes sparse models using structured pruning - combining the accuracy-preserving benefits of unstructured pruning with the runtime performance improvements of structured pruning. The pruned model variants generated by DNNShifter are smaller in size and thus faster than their dense and sparse model predecessors, making them suitable for inference at the edge while retaining accuracy close to that of the original dense model. DNNShifter generates a portfolio of model variants that can be swiftly interchanged depending on operational conditions. DNNShifter produces pruned model variants up to 93x faster than conventional training methods. Compared to sparse models, the pruned model variants are up to 5.14x smaller and have a 1.67x inference latency speedup, with no compromise to sparse model accuracy. In addition, DNNShifter has up to 11.9x lower overhead for switching models and up to 3.8x lower memory utilisation than existing approaches. DNNShifter is available for public use from https://github.com/blessonvar/DNNShifter. Publisher PDF. Peer reviewed.
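The idea of converting unstructured sparsity into structured pruning can be sketched in a few lines. This toy example (not DNNShifter's actual implementation) drops output channels whose weights were all zeroed by unstructured pruning, leaving a smaller dense layer:

```python
# Toy structured pruning of a sparse layer: channels zeroed by unstructured
# pruning are removed entirely, shrinking the dense matrix that remains.

def prune_channels(weights, tol=1e-8):
    """weights: list of per-channel weight lists; returns surviving channels."""
    return [ch for ch in weights if any(abs(w) > tol for w in ch)]

sparse_layer = [
    [0.0, 0.0, 0.0],      # channel fully zeroed -> removable
    [0.5, -0.2, 0.1],
    [0.0, 0.0, 1e-12],    # effectively zero -> removable
    [0.3, 0.0, -0.7],
]
dense_layer = prune_channels(sparse_layer)  # two channels survive
```

Unlike unstructured sparsity, the result is a genuinely smaller dense tensor, so inference speeds up on ordinary hardware without sparse-kernel support, which matches the latency and size gains the abstract reports.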

    DNNShifter: An Efficient DNN Pruning System for Edge Computing

    Deep neural networks (DNNs) underpin many machine learning applications. Production quality DNN models achieve high inference accuracy by training millions of DNN parameters, which has a significant resource footprint. This presents a challenge for resources operating at the extreme edge of the network, such as mobile and embedded devices that have limited computational and memory resources. To address this, models are pruned to create lightweight variants more suitable for these devices. Existing pruning methods are unable to provide similar quality models compared to their unpruned counterparts without significant time costs and overheads, or are limited to offline use cases. Our work rapidly derives suitable model variants while maintaining the accuracy of the original model. The model variants can be swapped quickly when system and network conditions change to match workload demand. This paper presents DNNShifter, an end-to-end DNN training, spatial pruning, and model switching system that addresses the challenges mentioned above. At the heart of DNNShifter is a novel methodology that prunes sparse models using structured pruning. The pruned model variants generated by DNNShifter are smaller in size and thus faster than their dense and sparse model predecessors, making them suitable for inference at the edge while retaining accuracy close to that of the original dense model. DNNShifter generates a portfolio of model variants that can be swiftly interchanged depending on operational conditions. DNNShifter produces pruned model variants up to 93x faster than conventional training methods. Compared to sparse models, the pruned model variants are up to 5.14x smaller and have a 1.67x inference latency speedup, with no compromise to sparse model accuracy. In addition, DNNShifter has up to 11.9x lower overhead for switching models and up to 3.8x lower memory utilisation than existing approaches. Comment: 14 pages, 7 figures, 5 tables.

    EcoFed: Efficient Communication for DNN Partitioning-based Federated Learning

    Efficiently running federated learning (FL) on resource-constrained devices is challenging since they are required to train computationally intensive deep neural networks (DNN) independently. DNN partitioning-based FL (DPFL) has been proposed as one mechanism to accelerate training, where the layers of a DNN (or computation) are offloaded from the device to the server. However, this creates significant communication overheads since the activation and gradient need to be transferred between the device and the server during training. While current research reduces the communication introduced by DNN partitioning using local loss-based methods, we demonstrate that these methods are ineffective in improving the overall efficiency (communication overhead and training speed) of a DPFL system. This is because they suffer from accuracy degradation and ignore the communication costs incurred when transferring the activation from the device to the server. This paper proposes EcoFed - a communication efficient framework for DPFL systems. EcoFed eliminates the transmission of the gradient by developing pre-trained initialization of the DNN model on the device for the first time. This reduces the accuracy degradation seen in local loss-based methods. In addition, EcoFed proposes a novel replay buffer mechanism and implements a quantization-based compression technique to reduce the transmission of the activation. It is experimentally demonstrated that EcoFed can significantly reduce the communication cost by up to 114x and accelerates training by up to 25.66x when compared to classic FL. Compared to vanilla DPFL, EcoFed achieves a 13.78x communication reduction and 2.83x training speed-up.
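The quantization-based activation compression mentioned above is commonly realised as 8-bit affine quantization. A minimal sketch of the general technique (not EcoFed's specific scheme): floats are mapped to integers in 0..255 before transfer and reconstructed on the server side.

```python
# Toy 8-bit affine quantization of an activation vector before transfer.

def quantize(xs, levels=256):
    """Map floats onto integers 0..levels-1; return codes plus (lo, scale)."""
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / (levels - 1) or 1.0   # guard against constant input
    codes = [round((x - lo) / scale) for x in xs]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Reconstruct approximate floats on the receiving side."""
    return [lo + c * scale for c in codes]

acts = [-1.5, 0.0, 0.25, 2.0]
codes, lo, scale = quantize(acts)     # each code fits in one byte
recon = dequantize(codes, lo, scale)  # within one quantization step of acts
```

Sending one byte per value instead of a 4-byte float is an immediate 4x reduction in activation traffic, before any further compression, at the cost of a bounded reconstruction error.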

    EcoFed: efficient communication for DNN partitioning-based federated learning

    Funding: This work was sponsored by Rakuten Mobile, Japan. Efficiently running federated learning (FL) on resource-constrained devices is challenging since they are required to train computationally intensive deep neural networks (DNN) independently. DNN partitioning-based FL (DPFL) has been proposed as one mechanism to accelerate training, where the layers of a DNN (or computation) are offloaded from the device to the server. However, this creates significant communication overheads since the intermediate activation and gradient need to be transferred between the device and the server during training. While current research reduces the communication introduced by DNN partitioning using local loss-based methods, we demonstrate that these methods are ineffective in improving the overall efficiency (communication overhead and training speed) of a DPFL system. This is because they suffer from accuracy degradation and ignore the communication costs incurred when transferring the activation from the device to the server. This article proposes EcoFed, a communication efficient framework for DPFL systems. EcoFed eliminates the transmission of the gradient by developing pre-trained initialization of the DNN model on the device for the first time. This reduces the accuracy degradation seen in local loss-based methods. In addition, EcoFed proposes a novel replay buffer mechanism and implements a quantization-based compression technique to reduce the transmission of the activation. It is experimentally demonstrated that EcoFed can reduce the communication cost by up to 133× and accelerate training by up to 21× when compared to classic FL. Compared to vanilla DPFL, EcoFed achieves a 16× communication reduction and 2.86× training time speed-up. EcoFed is available from https://github.com/blessonvar/EcoFed. Postprint. Peer reviewed.
