    Undergraduate Catalog of Studies, 2023-2024


    Developing a Road Freight Transport Performance Measurement System to Drive Sustainability: An Empirical Study of Egyptian Road Freight Transport Companies

    While several road freight performance measurement systems have been developed, only a limited number of quantified performance measurement frameworks encompassing diverse sets of performance metrics from multiple sustainable perspectives are available on a technological platform. These sets of metrics could be integrated as crucial performance indicators for assessing the operational performance of various road freight transport companies. These indicators include fuel efficiency, trip duration, vehicle loading, and cargo capacity. The objective of this research is to construct a conceptual road freight performance measurement framework that comprehensively incorporates performance elements from sustainable viewpoints (economic, environmental, and social), leveraging technology to measure the performance of road freight transport companies. This proposed framework aims to aid these companies in gauging their performance using technology, thus enhancing their operations towards sustainability. Within the road freight transport sector, several challenges exist, with congestion, road infrastructure maintenance, and driver training and qualifications being particularly pressing issues. The developed performance measurement framework offers the means for companies to evaluate the effects of technology integration on vehicles and overall performance. This allows companies to measure their performance from an operational standpoint rather than solely a strategic one, thereby identifying areas requiring improvement. Egypt was chosen as the empirical study location due to its relatively low level of technological integration within its road freight sector. This thesis employs an explanatory mixed methods approach, encompassing four distinct phases. The first phase entails a review to formulate the proposed theoretical performance measurement framework. Subsequently, the second phase involves conducting semi-structured interviews using a Delphi method to both develop a conceptual performance measurement framework and explore the present state of Egypt's road freight transport sector. Following this, the third phase encompasses surveys based on the results derived from the Delphi analysis, involving diverse participants from the road freight transport industry. The aim is to validate the developed performance measurement framework through an empirical study conducted in Egypt. Lastly, the fourth phase centres around organizing focus groups involving stakeholders within road freight transport companies. The goal here is to propose a roadmap for implementing the developed road freight transport performance measurement framework within the Egyptian context. The primary theoretical contribution of this research is the development of a road freight transport performance measurement framework that integrates the three sustainability dimensions with technology. Additionally, this study offers practical guidance for the application of the developed framework in various countries and contexts. From a practical standpoint, this research aids road freight transport managers in evaluating their operational performance, thereby identifying challenges, devising action plans, and making informed decisions to mitigate these issues and enhance sustainability-oriented performance. Ultimately, the developed road freight transport performance measurement framework is poised to promote performance measurement aligned with technology, fostering progress towards achieving the sustainable development goals by 2030.
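    To make the kind of indicators named in the abstract concrete, the short sketch below computes illustrative per-trip KPIs (fuel efficiency, average speed, load factor). The field names and formulas are assumptions for illustration only, not the thesis's actual metric definitions or framework.

        # Illustrative sketch only: per-trip KPIs of the kind the framework names.
        # Field names and formulas are assumptions, not the thesis's definitions.
        from dataclasses import dataclass

        @dataclass
        class Trip:
            distance_km: float
            fuel_litres: float
            duration_h: float
            cargo_tonnes: float
            capacity_tonnes: float

        def trip_kpis(trip: Trip) -> dict:
            """Return per-trip indicators; higher fuel efficiency and load factor
            point toward better economic and environmental performance."""
            return {
                "fuel_efficiency_km_per_l": trip.distance_km / trip.fuel_litres,
                "avg_speed_km_per_h": trip.distance_km / trip.duration_h,
                "load_factor": trip.cargo_tonnes / trip.capacity_tonnes,
            }

        print(trip_kpis(Trip(distance_km=320, fuel_litres=95, duration_h=5.5,
                             cargo_tonnes=18, capacity_tonnes=24)))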


    Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence

    Recent years have seen tremendous growth in Artificial Intelligence (AI)-based methodological development across a broad range of domains. In this rapidly evolving field, a large number of methods are being reported that use machine learning (ML) and Deep Learning (DL) models. The majority of these models are inherently complex and lack explanations of their decision-making process, which has led to them being termed 'black-box' models. One of the major bottlenecks to adopting such models in mission-critical application domains, such as banking, e-commerce, healthcare, and public services and safety, is the difficulty in interpreting them. Due to the rapid proliferation of these AI models, explaining their learning and decision-making processes is becoming harder, yet these domains require transparency and predictability. Finding flaws in black-box models, so as to reduce their false negative and false positive outcomes, also remains difficult and inefficient. Aiming to collate the current state of the art in interpreting black-box models, this study provides a comprehensive analysis of explainable AI (XAI) models. The development of XAI is reviewed meticulously through careful selection and analysis of the current state of the art of XAI research. The paper also provides a comprehensive and in-depth evaluation of XAI frameworks and their efficacy, to serve as a starting point for applied and theoretical researchers. Towards the end, it highlights emerging and critical issues in XAI research and showcases major, model-specific trends for better explanation, enhanced transparency, and improved prediction accuracy.
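    As a minimal illustration of post-hoc, model-agnostic explanation (one simple XAI technique, not any specific framework surveyed in the review), the sketch below ranks the features of a trained "black-box" classifier by permutation importance using scikit-learn.

        # Minimal model-agnostic explanation sketch: permutation importance ranks
        # features by how much shuffling each one degrades a trained model's score.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        X, y = load_breast_cancer(return_X_y=True, as_frame=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

        result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
        ranking = sorted(zip(X.columns, result.importances_mean),
                         key=lambda pair: pair[1], reverse=True)
        for feature, importance in ranking[:5]:
            print(f"{feature}: {importance:.3f}")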

    ENHANCING CLOUD SYSTEM RUNTIME TO ADDRESS COMPLEX FAILURES

    As the reliance on cloud systems intensifies in our progressively digital world, understanding and reinforcing their reliability becomes more crucial than ever. Despite impressive advancements in augmenting the resilience of cloud systems, the growing incidence of complex failures now poses a substantial challenge to the availability of these systems. As cloud systems continue to scale and increase in complexity, failures not only become more elusive to detect but can also lead to more catastrophic consequences. Such failures call into question the foundational premises of conventional fault-tolerance designs, necessitating novel system designs to counteract them. This dissertation aims to enhance distributed systems’ capabilities to detect, localize, and react to complex failures at runtime. To this end, it makes contributions that address three emerging categories of failures in cloud systems. The first part investigates partial failures, introducing OmegaGen, a tool adept at generating tailored checkers for detecting and localizing such failures. The second part grapples with silent semantic failures prevalent in cloud systems, presenting our study findings and introducing Oathkeeper, a tool that leverages past failures to infer rules and expose these silent issues. The third part explores solutions to slow failures via RESIN, a framework specifically designed to detect, diagnose, and mitigate memory leaks in cloud-scale infrastructures, developed in collaboration with Microsoft Azure. The dissertation concludes by offering insights into future directions for the construction of reliable cloud systems.
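    As a toy illustration of the slow-failure category (emphatically not RESIN's actual detection algorithm), the sketch below flags a process as a memory-leak suspect when its recent memory samples show a sustained upward least-squares trend.

        # Hedged sketch: flag a leak suspect when memory grows with a slope above
        # a threshold. A least-squares trend is a simple stand-in for the kind of
        # signal a slow-failure detector looks for.
        from statistics import mean

        def leak_suspect(samples_mb: list[float], min_slope_mb_per_sample: float = 1.0) -> bool:
            """Return True if memory usage trends upward faster than the threshold."""
            n = len(samples_mb)
            if n < 2:
                return False
            xs = range(n)
            x_bar, y_bar = mean(xs), mean(samples_mb)
            slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples_mb)) / \
                    sum((x - x_bar) ** 2 for x in xs)
            return slope >= min_slope_mb_per_sample

        print(leak_suspect([512, 530, 555, 571, 598, 620]))   # True: steady growth
        print(leak_suspect([512, 498, 520, 505, 515, 500]))   # False: oscillating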

    Meta-learning algorithms and applications

    Meta-learning, in the broader context, concerns how an agent learns about its own learning, allowing it to improve its learning process. Learning how to learn is not only beneficial for humans; it has also shown vast benefits for improving how machines learn. In the context of machine learning, meta-learning enables models to improve their learning process by selecting suitable meta-parameters that influence the learning. For deep learning specifically, the meta-parameters typically describe details of the training of the model but can also include a description of the model itself - the architecture. Meta-learning is usually done with specific goals in mind, for example improving the ability to generalize or to learn new concepts from only a few examples. Meta-learning can be powerful, but it comes with a key downside: it is often computationally costly. If these costs were alleviated, meta-learning would be more accessible to developers of new artificial intelligence models, allowing them to achieve greater goals or save resources. As a result, one key focus of our research is on significantly improving the efficiency of meta-learning. We develop two approaches, EvoGrad and PASHA, both of which significantly improve meta-learning efficiency in two common scenarios. EvoGrad allows us to efficiently optimize the values of a large number of differentiable meta-parameters, while PASHA enables us to efficiently optimize any type of meta-parameter, but fewer in number. Meta-learning is a tool that can be applied to solve various problems. Most commonly it is applied to learning new concepts from only a small number of examples (few-shot learning), but other applications exist too. To showcase the practical impact that meta-learning can make in the context of neural networks, we use meta-learning as a novel solution for two selected problems: more accurate uncertainty quantification (calibration) and general-purpose few-shot learning. Both are practically important problems, and using meta-learning approaches we can obtain better solutions than those obtained with existing approaches. Calibration is important for safety-critical applications of neural networks, while general-purpose few-shot learning tests a model's ability to generalize few-shot learning across diverse tasks such as recognition, segmentation and keypoint estimation. More efficient algorithms as well as novel applications enable the field of meta-learning to make a more significant impact on the broader area of deep learning and potentially solve problems that were too challenging before. Ultimately, both allow us to better utilize the opportunities that artificial intelligence presents.
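    As a toy illustration of what selecting a meta-parameter means (not the EvoGrad or PASHA algorithms), the sketch below treats the learning rate as the meta-parameter and keeps the candidate whose briefly trained base model scores best on held-out data.

        # Toy illustration of meta-learning as meta-parameter selection:
        # the outer loop picks the learning rate whose inner-loop model
        # generalizes best on a validation set.
        from sklearn.datasets import load_digits
        from sklearn.linear_model import SGDClassifier
        from sklearn.model_selection import train_test_split

        X, y = load_digits(return_X_y=True)
        X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

        def validation_score(learning_rate: float) -> float:
            """Inner loop: train a base model under one meta-parameter setting."""
            model = SGDClassifier(learning_rate="constant", eta0=learning_rate,
                                  max_iter=20, tol=None, random_state=0)
            model.fit(X_train, y_train)
            return model.score(X_val, y_val)

        # Outer (meta) loop: evaluate candidate meta-parameters and keep the best.
        candidates = [1e-4, 1e-3, 1e-2, 1e-1]
        best_lr = max(candidates, key=validation_score)
        print("selected learning rate:", best_lr)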

    Configuration Management of Distributed Systems over Unreliable and Hostile Networks

    Economic incentives of large criminal profits and the threat of legal consequences have pushed criminals to continuously improve their malware, especially command and control channels. This thesis applied concepts from successful malware command and control to explore the survivability and resilience of benign configuration management systems. This work expands on existing stage models of the malware life cycle to contribute a new model for identifying malware concepts applicable to benign configuration management. The Hidden Master architecture is a contribution to master-agent network communication. In the Hidden Master architecture, communication between master and agent is asynchronous and can operate through intermediate nodes. This protects the master secret key, which gives full control of all computers participating in configuration management. Multiple improvements to idempotent configuration were proposed, including the definition of a minimal base resource dependency model, simplified resource revalidation, and the use of an imperative general-purpose language for defining idempotent configuration. Following the constructive research approach, the improvements to configuration management were designed into two prototypes. This allowed validation in laboratory testing, in two case studies and in expert interviews. In laboratory testing, the Hidden Master prototype was more resilient than leading configuration management tools under high load and low memory conditions, and against packet loss and corruption. Only the research prototype was adaptable to a network without stable topology, owing to the asynchronous nature of the Hidden Master architecture. The main case study used the research prototype in a complex environment to deploy a multi-room, authenticated audiovisual system for a client of an organization deploying the configuration. The case studies indicated that an imperative general-purpose language can be used for idempotent configuration in real life, for defining new configurations in unexpected situations using the base resources and abstracting those using standard language features, and that such a system seems easy to learn. Potential business benefits were identified and evaluated using individual semi-structured expert interviews. Respondents agreed that the models and the Hidden Master architecture could reduce costs and risks, improve developer productivity and allow faster time-to-market. Protection of master secret keys and the reduced need for incident response were seen as key drivers for improved security. Low-cost geographic scaling and leveraging the file-serving capabilities of commodity servers were seen to improve scaling and resiliency. Respondents identified jurisdictional legal limitations on encryption and requirements for cloud operator auditing as factors potentially limiting the full use of some concepts.
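    To illustrate idempotent configuration expressed in an imperative general-purpose language (an illustrative sketch only, not the thesis's research prototype), the resource below converges a file to a desired state and reports whether anything had to change; applying it twice leaves the system in the same state.

        # Sketch of an idempotent file resource: only acts when the actual state
        # differs from the desired state, so repeated runs converge and report
        # "no change". Path and content are placeholders.
        from pathlib import Path

        def ensure_file(path: str, content: str, mode: int = 0o644) -> bool:
            """Ensure `path` exists with exactly `content` and `mode`; return True if changed."""
            target = Path(path)
            changed = False
            if not target.exists() or target.read_text() != content:
                target.write_text(content)
                changed = True
            if (target.stat().st_mode & 0o777) != mode:
                target.chmod(mode)
                changed = True
            return changed

        print(ensure_file("/tmp/motd", "managed by configuration tool\n"))  # True on first run
        print(ensure_file("/tmp/motd", "managed by configuration tool\n"))  # False: already converged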

    Strategy Tripod Perspective on the Determinants of Airline Efficiency in a Global Context: An Application of DEA and Tobit Analysis

    The airline industry is vital to contemporary civilization since it is a key player in the globalization process: linking regions, fostering global commerce, promoting tourism and aiding economic and social progress. However, there has been little study on the link between the operational environment and airline efficiency. Investigating the amalgamation of institutions, organisations and strategic decisions is critical to understanding how airlines operate efficiently. This research employs the strategy tripod perspective to investigate the efficiency of a global airline sample using a non-parametric linear programming method (data envelopment analysis [DEA]). Using a Tobit regression, the bootstrapped DEA efficiency change scores are then regressed to determine the drivers of efficiency. The strategy tripod is employed to assess the impact of institutions, industry and resources on airline efficiency. Institutions are measured by global indices of destination attractiveness; industry by competition, jet fuel and business model; and resources by factors such as the number of full-time employees, alliances, ownership and connectivity. The first part of the study uses panel data from 35 major airlines, collected from their annual reports for the period 2011 to 2018, and country attractiveness indices from global indicators. The second part of the research involves a qualitative data collection approach and semi-structured interviews with experts in the field to evaluate the impact of COVID-19 on the first part’s significant findings. The main findings reveal that airlines operate at a highly competitive level regardless of their competition intensity or origin. Furthermore, the unpredictability of the environment complicates airline operations. The efficiency drivers of an airline are partially determined by its type of business model, its degree of cooperation and how fuel cost is managed. Trade openness has a negative influence on airline efficiency. COVID-19 has upended the airline industry, forcing airlines to reconsider their business models and continuously increase cooperation. Human resources, sustainability and alternative fuel sources are critical to airline survival. Finally, this study provides some evidence for the practicality of the strategy tripod and hints at the need for a broader approach in the study of international strategies.
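    For readers unfamiliar with DEA, the sketch below solves an input-oriented CCR DEA model as a linear program with SciPy. The airline inputs and outputs are synthetic placeholders, and the formulation is a generic textbook one rather than the thesis's exact bootstrapped specification.

        # Hedged sketch of input-oriented CCR DEA solved with scipy.optimize.linprog.
        # Data below are synthetic placeholders, not the thesis's panel data.
        import numpy as np
        from scipy.optimize import linprog

        # rows = inputs (e.g. employees, fuel cost), columns = DMUs (airlines)
        X = np.array([[120.0, 90.0, 150.0, 70.0],
                      [300.0, 210.0, 400.0, 180.0]])
        # rows = outputs (e.g. revenue passenger-km), columns = DMUs
        Y = np.array([[450.0, 380.0, 520.0, 300.0]])

        def ccr_efficiency(o: int) -> float:
            """Minimise theta subject to X@lam <= theta*x_o and Y@lam >= y_o, lam >= 0."""
            m, n = X.shape
            s = Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lam_1..lam_n]
            A_in = np.hstack([-X[:, [o]], X])           # X@lam - theta*x_o <= 0
            A_out = np.hstack([np.zeros((s, 1)), -Y])   # -Y@lam <= -y_o
            A_ub = np.vstack([A_in, A_out])
            b_ub = np.r_[np.zeros(m), -Y[:, o]]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
            return res.fun

        for o in range(X.shape[1]):
            print(f"airline {o}: efficiency {ccr_efficiency(o):.3f}")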