
    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    Optimal Sequencing and Scheduling Algorithm for Traffic Flows Based on Extracted Control Actions Near the Airport

    This dissertation seeks to design an optimization algorithm, based on naturalistic flight data, with emphasis on safety, to perform a benefits' analysis when sequencing and scheduling aircraft at the runway. The viability of creating a decision-support tool to aid air traffic controllers in sequencing and optimizing airport operations is evaluated through the benefits' analysis. Air traffic control is a complex and critical system that ensures the safe and efficient movement of aircraft within the airspace. This is particularly true in the immediate vicinity of an airport. Unlike in en-route or terminal area airspace, where aircraft usually traverse well-established routes and procedures, near the airport, after completing a standard arrival procedure, the routes to the final approach are only partially defined. With safety being the foremost priority, the local tower controllers monitor and maintain separation between aircraft to prevent collisions and ensure the overall safety of the airspace. This involves constant surveillance, coordination, and decision-making to manage the dynamic movement of aircraft, changing weather conditions, and potential hazards. All the while, the controllers make decisions regarding tromboning or vectoring based on various factors, including traffic volume, airspace restrictions, weather conditions, operational efficiency, and safety considerations, to ensure safe sequencing of aircraft at the runway. A novel framework is presented for modeling, characterizing, and clustering aircraft trajectories by extracting the traffic control decisions of air traffic controllers. A hidden Markov model was developed and applied to transform trajectories from a sequence of spatio-temporal position reports to a series of control actions. The edit distance is utilized to quantify the dissimilarity of two variable-length trajectory strings, followed by the application of the k-medoids algorithm to cluster the arrival flows.
Next, a repeatable process for detecting and labeling outlier trajectories within a cluster is introduced. Through application to a set of historical trajectories at Ronald Reagan Washington National Airport (DCA), it is demonstrated that the proposed clustering framework overcomes the deficiency of the classical approach and successfully captures arrival flows of trajectories that undergo similar control actions. Leveraging the set of arrival flows, statistical and machine learning models of air traffic controllers are created and evaluated for ordering aircraft to land at the runway. Potential inefficiencies in aircraft sequencing are identified at DCA, indicating a performance gap and room for additional sequence optimization. To overcome these inefficiencies, a mixed-integer zero-one formulation is designed for a single runway that incorporates safety through separation constraints between aircraft, imposed at each metering point from airspace entry until landing. With the objective of maximizing runway throughput and minimizing traversed distance, the model sequences and schedules arrivals and departures and generates safe, conflict-free arrival trajectories to realize that schedule. The optimization output shows that the model recovers approximately 52% of the performance gap between the actual distance traversed and the idealized (cluster-centroid) distance traversed by all arrival aircraft. Moreover, each arrival aircraft, on average, traverses 2.12 nautical miles less than its historical trajectory, saving approximately 10 US gallons of jet fuel. By showcasing the potential benefits of the optimization, this dissertation takes a step towards the long-term vision of a decision-support tool that assists air traffic controllers in optimally sequencing and scheduling aircraft.
To fully realize these benefits, further development and refinement of the algorithm are necessary to align it with real-world operational demands. As future work, the research would be expanded to integrate uncertainties such as weather conditions and wind directions into the optimization.
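
The clustering pipeline described above (edit distance over control-action strings, then k-medoids) can be sketched briefly. This is a minimal illustration of the two general algorithms, not the dissertation's implementation; the action alphabet and sample flows are invented.

```python
import random

def edit_distance(a, b):
    # Levenshtein distance between two variable-length action strings,
    # e.g. "VVT" might encode vector, vector, trombone (alphabet invented).
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def k_medoids(items, k, dist, iters=20, seed=0):
    # Basic alternating k-medoids: assign each item to its nearest medoid,
    # then re-pick each cluster's minimiser of total intra-cluster distance.
    rng = random.Random(seed)
    medoids = rng.sample(items, k)
    for _ in range(iters):
        clusters = {m: [] for m in medoids}
        for x in items:
            clusters[min(medoids, key=lambda m: dist(x, m))].append(x)
        new_medoids = [min(members, key=lambda c: sum(dist(c, o) for o in members))
                       for members in clusters.values() if members]
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids, clusters
```

Because the distance operates on strings of control actions rather than raw position reports, flows that received similar controller interventions cluster together even when their geometries differ.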

    Property valuation with interpretable machine learning

    Property valuation is an important task for various stakeholders, including banks, local authorities, property developers, and brokers. Because of the characteristics of the real estate market, such as the infrequency of trades, limited supply, negotiated prices, and small submarkets with unique traits, there is no clear market value for properties. Traditionally, property valuations are done by expert appraisers. Property valuation can also be done accurately with machine learning methods, but the lack of interpretability of accurate machine learning methods can limit their adoption. Interpretable machine learning methods could be a solution to this issue, but there are concerns about their accuracy. This thesis aims to evaluate the feasibility of interpretable machine learning methods in property valuation by comparing a promising interpretable method to a more complex machine learning method that has previously performed well in property valuation. Both methods are chosen based on previous literature. The two chosen methods, Extreme Gradient Boosting (XGB) and the Explainable Boosting Machine (EBM), are compared in terms of prediction accuracy for properties in six large municipalities of Denmark. In addition to the accuracy comparison, the interpretability of the EBM is highlighted. The accuracy of the XGB method is better overall, although the differences between the two methods within individual municipalities are small. The interpretability of the EBM is good, as it is possible to understand both how the model makes predictions in general and how individual predictions are made.
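
The interpretability the abstract attributes to the EBM comes from its additive structure: a prediction decomposes into an intercept plus one contribution per feature. The toy sketch below illustrates that decomposition only; the shape functions, feature names, and numbers are invented, and a real EBM learns its shapes from data.

```python
# Toy EBM-style additive model: price = intercept + sum of per-feature
# shape functions. Hand-written shapes, purely for illustration.
INTERCEPT = 120_000.0

SHAPE_FUNCTIONS = {
    "area_sqm": lambda sqm: 1_500.0 * sqm,           # larger area adds value
    "age_years": lambda yrs: -800.0 * min(yrs, 50),  # depreciation, capped
}

def predict_with_explanation(features):
    # Returns the prediction plus each feature's additive contribution,
    # which is exactly what makes the model's reasoning inspectable.
    contributions = {name: f(features[name])
                     for name, f in SHAPE_FUNCTIONS.items()}
    return INTERCEPT + sum(contributions.values()), contributions

price, why = predict_with_explanation({"area_sqm": 80, "age_years": 10})
```

Here `why` tells a valuer exactly how much each feature moved the estimate, something a boosted tree ensemble like XGB cannot report directly.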

    Proc. 33. Workshop Computational Intelligence, Berlin, 23.-24.11.2023

    The workshop proceedings contain the contributions of the 33rd workshop "Computational Intelligence", taking place from 23 to 24 November 2023 in Berlin. The focus is on methods, applications, and tools for fuzzy systems, artificial neural networks, evolutionary algorithms, and data-mining methods, as well as the comparison of methods on the basis of industrial and benchmark problems.

    Towards Intelligent Runtime Framework for Distributed Heterogeneous Systems

    Scientific applications strive for increased memory and computing performance, requiring massive amounts of data and time to produce results. Applications utilize large-scale, parallel computing platforms with advanced architectures to accommodate their needs. However, developing performance-portable applications for modern, heterogeneous platforms requires significant effort and expertise in both the application and systems domains. This is especially relevant for unstructured applications whose workflow is not statically predictable due to their heavily data-dependent nature. One possible solution to this problem is the introduction of an intelligent Domain-Specific Language (iDSL) that transparently helps maintain correctness, hides the idiosyncrasies of low-level hardware, and scales applications. An iDSL includes domain-specific language constructs, a compilation toolchain, and a runtime providing task scheduling, data placement, and workload balancing across and within heterogeneous nodes. In this work, we focus on the runtime framework. We introduce a novel design and extension of a runtime framework, the Parallel Runtime Environment for Multicore Applications. In response to ever-increasing intra- and inter-node concurrency, the runtime system supports efficient task scheduling and workload balancing at both levels while allowing the development of custom policies. Moreover, the new framework provides abstractions supporting the utilization of heterogeneous distributed nodes consisting of CPUs and GPUs and is extensible to other devices. We demonstrate that by utilizing this work, an application (or the iDSL) can scale its performance on heterogeneous exascale-era supercomputers with minimal effort. A future goal for this framework (out of the scope of this thesis) is integration with machine learning to further improve its decision-making and performance.
As a bridge to this goal, and since the framework is under development, we experiment with data from nuclear physics particle accelerators and demonstrate the significant improvements achieved by utilizing machine learning in the hit-based track reconstruction process.
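
The intra-node scheduling idea the abstract describes, a pool of heterogeneous workers draining a shared ready queue, can be sketched minimally. This is an illustration of the general pattern only, not the API of the Parallel Runtime Environment for Multicore Applications; the device names are invented.

```python
import queue
import threading

def run_tasks(tasks, devices):
    """Toy scheduler sketch: a shared ready queue feeds one worker thread
    per device; each result is recorded with the device that produced it."""
    ready = queue.Queue()
    for t in tasks:
        ready.put(t)
    results = []
    lock = threading.Lock()

    def worker(device):
        while True:
            try:
                task = ready.get_nowait()  # pull next ready task
            except queue.Empty:
                return                      # no work left for this device
            out = task(device)
            with lock:
                results.append((device, out))

    threads = [threading.Thread(target=worker, args=(d,)) for d in devices]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results
```

A shared queue gives automatic load balancing: faster devices simply pull more tasks, which is the behaviour a heterogeneous CPU/GPU node needs when task costs vary.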

    Operational Research: methods and applications

    This is the final version. Available on open access from Taylor & Francis via the DOI in this record. Throughout its history, Operational Research has evolved to include methods, models and algorithms that have been applied to a wide range of contexts. This encyclopedic article consists of two main sections: methods and applications. The first summarises the up-to-date knowledge and provides an overview of the state-of-the-art methods and key developments in the various subdomains of the field. The second offers a wide-ranging list of areas where Operational Research has been applied. The article is meant to be read in a nonlinear fashion and used as a point of reference by a diverse pool of readers: academics, researchers, students, and practitioners. The entries within the methods and applications sections are presented in alphabetical order. The authors dedicate this paper to the 2023 Turkey/Syria earthquake victims. We sincerely hope that advances in OR will play a role towards minimising the pain and suffering caused by this and future catastrophes.

    An integrated machine learning and experimental approach to uncover ageing-associated processes in Fission Yeast

    This work attempts to bring together knowledge of different pathways associated with cellular ageing and to create connections between them using both machine learning and experimental methods. Initially, I describe the development of a novel proxy for chronological lifespan as part of the analysis pipeline of a high-throughput chronological lifespan assay in fission yeast. I then use this technique to develop novel machine learning models that can predict lifespan, a complex phenotype, from simple traits, and to identify ageing-associated phenotypes in fission yeast. Complementary to this, I investigate a transcription factor of interest, Hsr1, for its involvement in cellular ageing and ageing-associated processes. I describe its direct regulatory targets and how it forms a network with at least four other ageing-associated transcription factors that bridges the gaps between models of ageing, and I suggest mechanisms for these interactions. In this way, this work provides novel links between cellular ageing mechanisms and ageing-associated processes from both machine learning and experimental sources.

    Structured parallelism discovery with hybrid static-dynamic analysis and evaluation technique

    Parallel computer architectures have dominated the computing landscape for the past two decades, a trend that is only expected to continue and intensify with increasing specialization and heterogeneity. This creates huge pressure across the software stack to produce programming languages, libraries, frameworks, and tools that will efficiently exploit the capabilities of parallel computers, not only for new software but also for revitalizing existing sequential code. Automatic parallelization, despite decades of research, has had limited success in transforming sequential software to take advantage of efficient parallel execution. This thesis investigates three approaches that use commutativity analysis as the enabler for parallelization, which has the potential to overcome the limitations of traditional techniques. We introduce the concept of liveness-based commutativity for sequential loops. We examine the use of a practical analysis utilizing liveness-based commutativity in a symbolic execution framework. Symbolic execution represents input values as groups of constraints, consequently deriving the output as a function of the input and enabling the identification of further program properties. We employ this feature to develop an analysis that discerns commutativity properties between loop iterations. We study the application of this approach on loops taken from real-world programs in the OLDEN and NAS Parallel Benchmark (NPB) suites, and identify its limitations and related overheads. Informed by these findings, we develop Dynamic Commutativity Analysis (DCA), a new technique that leverages profiling information from program execution with specific input sets. Using profiling information, we track liveness information and detect loop commutativity by examining the code's live-out values. We evaluate DCA against almost 1400 loops of the NPB suite, discovering 86% of them to be parallelizable.
Comparing our results against dependence-based methods, we match the detection efficacy of two dynamic approaches and outperform three static approaches. Additionally, DCA is able to automatically detect parallelism in loops that iterate over Pointer-Linked Data Structures (PLDSs), taken from a wide range of benchmarks used in the literature, where all other techniques we considered failed. Parallelizing the discovered loops, our methodology achieves an average speedup of 3.6× across NPB (and up to 55×) and up to 36.9× for the PLDS-based loops on a 72-core host. We also demonstrate that our methodology, despite relying on specific input values for profiling each program, is able to correctly identify parallelism that is valid for all potential input sets. Lastly, we develop a methodology to utilize liveness-based commutativity, as implemented in DCA, to detect latent loop parallelism in the shape of patterns. Our approach applies a series of transformations that enable multiple applications of DCA over the generated multi-loop code section and matches its loop commutativity outcomes against the expected criteria for each pattern. Applying our methodology to sets of sequential loops, we are able to identify well-known parallel patterns (i.e., maps, reductions, and scans). This extends the scope of parallelism detection to loops, such as those performing scan operations, which cannot be determined to be parallelizable by simply evaluating liveness-based commutativity conditions in their original form.
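
The principle behind checking commutativity on live-out values can be illustrated dynamically: execute a loop with adjacent iterations swapped and compare the observable state. This is a toy sketch of that principle under a specific input, not the DCA implementation; the loop bodies in the usage below are invented examples.

```python
import copy

def liveout_commutative(loop_body, state, n_iters):
    """Check whether adjacent loop iterations commute with respect to the
    live-out state for this particular input: rerun the loop with each
    neighbouring pair of iterations swapped and compare the final state."""
    def run(order):
        s = copy.deepcopy(state)   # isolate each trial run
        for i in order:
            loop_body(s, i)
        return s

    baseline = run(range(n_iters))
    for i in range(n_iters - 1):
        order = list(range(n_iters))
        order[i], order[i + 1] = order[i + 1], order[i]
        if run(order) != baseline:  # live-out differs => not commutative
            return False
    return True

# A reduction commutes on its live-out value; an order-recording loop does not.
is_sum_ok = liveout_commutative(
    lambda s, i: s.__setitem__("acc", s["acc"] + i), {"acc": 0}, 5)
is_log_ok = liveout_commutative(
    lambda s, i: s["log"].append(i), {"log": []}, 5)
```

As the abstract notes for DCA, such a check only witnesses commutativity for the profiled input; arguing validity across all inputs requires the additional reasoning developed in the thesis.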