
    The Impact Of Safety On Fleet Acquisition And Management In U.S. Commercial Airlines

    The topic of aircraft safety is pervasive in many domains of the airline industry and influences all types of air transportation operations. Aircraft acquisition and fleet planning are key functions in a commercial airline that ensure the achievement of the airline’s operational goals, such as matching capacity with demand. With fluctuations in passenger demand, it is vital to plan an airline’s fleet strategically to accommodate these changes safely. Existing literature suggests that aircraft safety is factored into passengers’ decisions when choosing an airline, which in turn impacts the economics of the airline. The purpose of this study is to explore the impact of safety on fleet acquisition and management processes in commercial airlines in the U.S. The findings suggest that safety plays a major role in aircraft acquisition and fleet management activities in commercial airlines, and they identify contributory variables that both influence and are influenced by safety events related to an aircraft type. The results from this study serve as a conceptual framework for commercial airlines to better gauge the crucial elements that drive fleet planning decisions and to effectively execute strategic fleet management decisions.

    Look and Modify: Modification Networks for Image Captioning

    Attention-based neural encoder-decoder frameworks have been widely used for image captioning. Many of these frameworks focus entirely on generating the caption from scratch, relying solely on the image features or the object-detection regional features. In this paper, we introduce a novel framework that learns to modify existing captions from a given framework by modeling the residual information: at each timestep, the model learns what to keep, remove, or add to the existing caption, allowing it to focus fully on "what to modify" rather than on "what to predict". We evaluate our method on the COCO dataset, trained on top of several image captioning frameworks, and show that our model successfully modifies captions, yielding improved captions with better evaluation scores. Comment: Published in BMVC 2019.
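
    To make the "modify rather than predict" idea concrete, below is a minimal, hypothetical sketch of a decoder step in which a learned gate decides how much of the existing caption token to keep versus replace with newly generated residual information. All module and variable names are illustrative assumptions, not the authors' architecture, and image conditioning is omitted for brevity.

```python
import torch
import torch.nn as nn

class ResidualCaptionDecoder(nn.Module):
    """Sketch: blend the existing caption token with a freshly decoded residual."""
    def __init__(self, vocab_size, embed_dim=512, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Attention over visual features is omitted; only the textual path is shown.
        self.lstm = nn.LSTMCell(embed_dim * 2, hidden_dim)
        self.gate = nn.Linear(hidden_dim + embed_dim, embed_dim)
        self.proj = nn.Linear(hidden_dim, embed_dim)
        self.out = nn.Linear(embed_dim, vocab_size)

    def forward(self, prev_token, existing_token, state):
        w_prev = self.embed(prev_token)        # token generated so far
        w_exist = self.embed(existing_token)   # aligned token from the caption being modified
        h, c = self.lstm(torch.cat([w_prev, w_exist], dim=-1), state)
        keep = torch.sigmoid(self.gate(torch.cat([h, w_exist], dim=-1)))  # "what to keep"
        fused = keep * w_exist + (1 - keep) * torch.tanh(self.proj(h))    # residual change
        return self.out(fused), (h, c)

# Usage with dummy tensors: batch of 2, one decoding step.
dec = ResidualCaptionDecoder(vocab_size=10000)
state = (torch.zeros(2, 512), torch.zeros(2, 512))
logits, state = dec(torch.tensor([1, 2]), torch.tensor([3, 4]), state)
print(logits.shape)  # torch.Size([2, 10000])
```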

    Uni-NLX: Unifying Textual Explanations for Vision and Vision-Language Tasks

    Natural Language Explanations (NLE) aim at supplementing the prediction of a model with human-friendly natural text. Existing NLE approaches involve training separate models for each downstream task. In this work, we propose Uni-NLX, a unified framework that consolidates all NLE tasks into a single, compact multi-task model using a unified training objective of text generation. Additionally, we introduce two new NLE datasets: 1) ImageNetX, a dataset of 144K samples for explaining ImageNet categories, and 2) VQA-ParaX, a dataset of 123K samples for explaining the task of Visual Question Answering (VQA). Both datasets are derived by leveraging large language models (LLMs). By training on the 1M combined NLE samples, our single unified framework is capable of simultaneously performing seven NLE tasks, including VQA, visual recognition and visual reasoning tasks, with 7X fewer parameters, demonstrating comparable performance to the independent task-specific models of previous approaches and even outperforming them on certain tasks. Code is available at https://github.com/fawazsammani/uni-nlx. Comment: Accepted to ICCVW 2023.
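
    The "single unified training objective" can be pictured as casting every NLE task into the same text-to-text form and training with one cross-entropy loss. The sketch below is only an assumption about how such unification might look; the task prefixes, field names, and example explanations are made up for illustration and are not the paper's exact data format.

```python
# Cast any NLE task as a (task-prefixed input text, target explanation text) pair,
# so one seq2seq model and one next-token cross-entropy loss cover all tasks.
def to_unified_example(task, question, answer, explanation):
    """Flatten an NLE instance into a single text-to-text pair (illustrative format)."""
    source = f"[{task}] question: {question} answer: {answer}"
    target = f"because {explanation}"
    return source, target

examples = [
    to_unified_example("vqa-x", "what sport is this?", "surfing",
                       "the man is riding a wave on a surfboard"),
    to_unified_example("imagenetx", "what category is shown?", "goldfinch",
                       "the bird has a yellow body with black wings"),
]

# All tasks are mixed into one dataset and trained with the same text-generation
# objective, which is what makes the model a single multi-task explainer.
for src, tgt in examples:
    print(src, "->", tgt)
```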

    Show, Edit and Tell: A Framework for Editing Image Captions

    Most image captioning frameworks generate captions directly from images, learning a mapping from visual features to natural language. However, editing existing captions can be easier than generating new ones from scratch. Intuitively, when editing captions, a model is not required to learn information that is already present in the caption (i.e., sentence structure), enabling it to focus on fixing details (e.g., replacing repetitive words). This paper proposes a novel approach to image captioning based on iterative adaptive refinement of an existing caption. Specifically, our caption-editing model consists of two sub-modules: (1) EditNet, a language module with an adaptive copy mechanism (Copy-LSTM) and a Selective Copy Memory Attention mechanism (SCMA), and (2) DCNet, an LSTM-based denoising auto-encoder. These components enable our model to directly copy from and modify existing captions. Experiments demonstrate that our new approach achieves state-of-the-art performance on the MS COCO dataset both with and without sequence-level training. Comment: Accepted to CVPR 2020.
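
    In the same spirit as the adaptive copy mechanism described above, the sketch below shows a generic copy-versus-generate decision: a gate mixes the probability of copying the aligned token from the existing caption with the probability of generating a fresh token. This is a simplified illustration under my own naming, not the paper's Copy-LSTM or SCMA modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CopyOrGenerate(nn.Module):
    """Sketch: gate between copying the existing caption token and generating a new one."""
    def __init__(self, hidden_dim, vocab_size):
        super().__init__()
        self.copy_gate = nn.Linear(hidden_dim, 1)
        self.generator = nn.Linear(hidden_dim, vocab_size)
        self.vocab_size = vocab_size

    def forward(self, hidden, existing_token):
        p_gen = F.softmax(self.generator(hidden), dim=-1)             # distribution over vocabulary
        p_copy = F.one_hot(existing_token, self.vocab_size).float()   # point mass on the existing token
        g = torch.sigmoid(self.copy_gate(hidden))                     # probability of copying
        return g * p_copy + (1 - g) * p_gen

# Usage: mix a copied token from the existing caption with a freshly generated one.
layer = CopyOrGenerate(hidden_dim=512, vocab_size=10000)
probs = layer(torch.randn(2, 512), torch.tensor([42, 7]))
print(probs.shape)  # torch.Size([2, 10000])
```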

    Visualizing and Understanding Contrastive Learning

    Contrastive learning has revolutionized the field of computer vision, learning rich representations from unlabeled data that generalize well to diverse vision tasks. Consequently, it has become increasingly important to explain these approaches and understand their inner workings. Given that contrastive models are trained with interdependent and interacting inputs and aim to learn invariance through data augmentation, existing methods for explaining single-image systems (e.g., image classification models) are inadequate, as they fail to account for these factors. Additionally, there is a lack of evaluation metrics designed to assess pairs of explanations, and no analytical studies have been conducted to investigate the effectiveness of the different techniques used to explain contrastive learning. In this work, we design visual explanation methods that contribute towards understanding similarity learning tasks from pairs of images. We further adapt existing metrics, used to evaluate visual explanations of image classification systems, to suit pairs of explanations, and evaluate our proposed methods with these metrics. Finally, we present a thorough analysis of visual explainability methods for contrastive learning, establish their correlation with downstream tasks, and demonstrate the potential of our approaches to investigate their merits and drawbacks.
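
    As a rough illustration of what "explaining a pair" can mean, the sketch below computes the gradient of the cosine similarity between two image embeddings with respect to each input, giving one saliency map per image of the pair. This generic similarity-saliency baseline is an assumption made for illustration; the paper's actual explanation methods and evaluation metrics differ, and the ResNet-18 here merely stands in for a contrastively trained encoder.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

encoder = models.resnet18(weights=None)   # stand-in for a contrastively trained encoder
encoder.fc = torch.nn.Identity()          # use the backbone embedding, drop the classifier head
encoder.eval()

img_a = torch.randn(1, 3, 224, 224, requires_grad=True)
img_b = torch.randn(1, 3, 224, 224, requires_grad=True)

# Similarity of the pair's embeddings, differentiated back to both inputs.
sim = F.cosine_similarity(encoder(img_a), encoder(img_b), dim=-1).sum()
sim.backward()

# Per-pixel saliency for each image of the pair, aggregated over channels.
saliency_a = img_a.grad.abs().max(dim=1).values
saliency_b = img_b.grad.abs().max(dim=1).values
print(saliency_a.shape, saliency_b.shape)  # torch.Size([1, 224, 224]) each
```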

    Benefits of Biochar Addition in a Sustainable Agriculture Practice: Soil Nutrients Dynamics, Enzyme Activities and Plant Growth

    Biochar is a carbon-rich material resulting from the pyrolysis of plant and animal biomass. It has been used as a soil amendment for centuries, dating back to the Mayan civilization. Attaining sustainability in agriculture is not easy; however, the addition of biochar may reduce the adverse effects of numerous malpractices in conventional agriculture. Biochar benefits soil physicochemical properties such as soil bulk density, aggregate stability, porosity, water holding capacity and soil organic carbon content. However, it is essential to also consider the negative aspects of biochar, in terms of atmospheric emissions during production and occupational health and safety at the time of use. The application of biochar still has both benefits and detriments (e.g., the priming effect); thus, this review highlights the importance of further research on the application of biochar as a soil amendment. The lack of long-term field studies in various soils using commercially produced biochar may restrict knowledge of biochar's true potential and its effect on soil nutrient dynamics, microbial structure, and crop yield. Keywords: Land degradation, Biochar, Nutrient retention, Soil quality, Microbial community.