
    Simplex Control Methods for Robust Convergence of Small Unmanned Aircraft Flight Trajectories in the Constrained Urban Environment

    Constrained optimal control problems for Small Unmanned Aircraft Systems (SUAS) have long suffered from excessive computation times caused by a combination of constraint modeling techniques, the quality of the initial path solution provided to the optimal control solver, and improperly defined bounds on system state variables, ultimately preventing implementation in real-time, on-board systems. In this research, a new hybrid approach for real-time path planning of SUAS is examined. During autonomous flight, a SUAS is tasked to traverse from one target region to a second while avoiding hard constraints consisting of the building structures of an urban environment. Feasible path solutions are determined through highly constrained spaces, investigating narrow corridors, visiting multiple waypoints, and minimizing incursions into keep-out regions. These issues are addressed herein with a new approach that triangulates the search space in two dimensions, or applies a tetrahedral discretization in three dimensions, to define a polygonal search corridor free of constraints, while alleviating the dependency on problem-specific parameters by translating the problem into barycentric coordinates. Within this connected simplex construct, trajectories are solved using direct orthogonal collocation methods while leveraging navigation mesh techniques developed for fast geometric path-planning solutions. To illustrate two-dimensional flight trajectories, sample results are presented for flight through downtown Chicago at an altitude of 600 feet above ground level. The three-dimensional problem is examined for feasibility by applying the methodology to a small-scale problem. Computation and objective times are reported to illustrate the design implications for real-time optimal control systems, with results showing an 86% reduction in computation time over traditional methods.
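
    The barycentric translation at the core of this approach can be illustrated with a minimal sketch (an illustration only, not the authors' implementation): a 2-D point is expressed as weights over a triangle's vertices, and the point lies inside the simplex exactly when all weights are non-negative, independent of the triangle's absolute geometry.

        import numpy as np

        def barycentric(p, a, b, c):
            # Solve [b-a | c-a] @ [v, w]^T = p - a for the two free weights.
            T = np.column_stack((b - a, c - a))
            v, w = np.linalg.solve(T, p - a)
            return np.array([1.0 - v - w, v, w])

        a, b, c = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([0.0, 3.0])
        lam = barycentric(np.array([1.0, 1.0]), a, b, c)
        inside = np.all(lam >= 0.0)   # all weights non-negative => point is in the simplex
        print(lam, inside)            # [0.4167 0.25 0.3333] True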

    Low-latency, query-driven analytics over voluminous multidimensional, spatiotemporal datasets

    Ubiquitous data collection from sources such as remote sensing equipment, networked observational devices, location-based services, and sales tracking has led to the accumulation of voluminous datasets; IDC projects that by 2020 we will generate 40 zettabytes of data per year, while Gartner and ABI estimate that 20-35 billion new devices will be connected to the Internet in the same time frame. The storage and processing requirements of these datasets far exceed the capabilities of modern computing hardware, which has led to the development of distributed storage frameworks that can scale out by assimilating more computing resources as necessary. While challenging in its own right, storing and managing voluminous datasets is only the precursor to a broader field of study: extracting knowledge, insights, and relationships from the underlying datasets. The basic building block of this knowledge discovery process is analytic queries, encompassing both query instrumentation and evaluation. This dissertation is centered on query-driven exploratory and predictive analytics over voluminous, multidimensional datasets. Both of these types of analysis represent a higher-level abstraction over classical query models; rather than indexing every discrete value for subsequent retrieval, our framework autonomously learns the relationships and interactions between dimensions in the dataset (including time-series and geospatial aspects) and makes the information readily available to users. This functionality includes statistical synopses, correlation analysis, hypothesis testing, probabilistic structures, and predictive models that not only enable the discovery of nuanced relationships between dimensions, but also allow future events and trends to be predicted. This requires specialized data structures and partitioning algorithms, along with adaptive reductions in the search space and management of the inherent trade-off between timeliness and accuracy. The algorithms presented in this dissertation were evaluated empirically on real-world geospatial time-series datasets in a production environment, and are broadly applicable across other storage frameworks.
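
    One common building block of the kind of statistical synopsis described above is a mergeable running summary: each partition maintains counts, means, and squared deviations online, and per-partition summaries combine without revisiting raw observations. This is a generic sketch (Welford/Chan updates), not the dissertation's actual data structures.

        from dataclasses import dataclass

        @dataclass
        class Synopsis:
            n: int = 0
            mean: float = 0.0
            m2: float = 0.0   # running sum of squared deviations from the mean

            def update(self, x: float) -> None:
                # Welford's online update: one pass, no raw values retained.
                self.n += 1
                d = x - self.mean
                self.mean += d / self.n
                self.m2 += d * (x - self.mean)

            def merge(self, other: "Synopsis") -> "Synopsis":
                # Chan's parallel combination: lets per-partition summaries roll up.
                n = self.n + other.n
                d = other.mean - self.mean
                mean = self.mean + d * other.n / n
                m2 = self.m2 + other.m2 + d * d * self.n * other.n / n
                return Synopsis(n, mean, m2)

            @property
            def variance(self) -> float:
                return self.m2 / (self.n - 1) if self.n > 1 else 0.0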

    FPGAs in Bioinformatics: Implementation and Evaluation of Common Bioinformatics Algorithms in Reconfigurable Logic

    Life. Much effort is devoted to granting humanity a little insight into this fascinating and complex but fundamental topic. In order to understand the relations and to derive consequences, humans have begun to sequence their genomes, i.e. to determine their DNA sequences, in order to infer information, e.g. related to genetic diseases. The process of DNA sequencing as well as the subsequent analysis presents a computational challenge for current computing systems due to the sheer amount of data alone; runtimes of more than one day for the analysis of simple datasets are common, even when the process is already run on a CPU cluster. This thesis shows how this general problem in the area of bioinformatics can be tackled with reconfigurable hardware, especially FPGAs. Three compute-intensive problems are highlighted: sequence alignment, SNP interaction analysis, and genotype imputation. In the area of sequence alignment, the software BLASTp for protein database searches is presented as an example, implemented, and evaluated. SNP interaction analysis is presented with three applications performing an exhaustive search for interactions, including the corresponding statistical tests: BOOST, iLOCi, and the mutual information measurement. All applications are implemented in FPGA hardware and evaluated, resulting in an impressive speedup of more than three orders of magnitude when compared to standard computers. The last topic, genotype imputation, presents a two-step process composed of the phasing step and the actual imputation step. The focus lies on the phasing step, which is targeted by the SHAPEIT2 application. SHAPEIT2 is discussed with its underlying mathematical methods in detail, and finally implemented and evaluated. A remarkable speedup of 46 is reached here as well.
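
    For a sense of the per-pair work such FPGA designs parallelize, the mutual information measurement for one SNP pair reduces to a 3x3 contingency table. The following is a toy CPU sketch assuming the usual 0/1/2 genotype coding, not the thesis code.

        import numpy as np

        def mutual_information(snp_a, snp_b):
            # Joint distribution of two genotype vectors as a 3x3 contingency table.
            joint = np.zeros((3, 3))
            for a, b in zip(snp_a, snp_b):
                joint[a, b] += 1
            joint /= joint.sum()
            pa, pb = joint.sum(axis=1), joint.sum(axis=0)
            mi = 0.0
            for i in range(3):
                for j in range(3):
                    if joint[i, j] > 0:
                        mi += joint[i, j] * np.log2(joint[i, j] / (pa[i] * pb[j]))
            return mi

        # The exhaustive pairwise scan -- the O(n^2) loop the FPGA designs accelerate.
        snps = np.random.randint(0, 3, size=(100, 50))   # 100 SNPs x 50 samples of toy data
        pairs = [(i, j, mutual_information(snps[i], snps[j]))
                 for i in range(len(snps)) for j in range(i + 1, len(snps))]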

    Wide-Angle Multistatic Synthetic Aperture Radar: Focused Image Formation and Aliasing Artifact Mitigation

    Traditional monostatic Synthetic Aperture Radar (SAR) platforms force the user to choose between two image types: larger, low-resolution images or smaller, high-resolution images. Switching to a Wide-Angle Multistatic Synthetic Aperture Radar (WAM-SAR) approach allows the formation of large, high-resolution images. Unfortunately, WAM-SAR suffers from two significant implementation problems. First, wavefront curvature effects, non-linear flight paths, and warped ground planes lead to image defocusing with traditional SAR processing methods. A new 3-D monostatic/bistatic image formation routine solves the defocusing problem, correcting for all relevant wide-angle effects. Inverse SAR (ISAR) imagery from a Radar Cross Section (RCS) chamber validates this approach. The second implementation problem stems from the large Doppler spread in the wide-angle scene, which leads to severe aliasing. This research effort develops a new anti-aliasing technique using randomized Stepped-Frequency (SF) waveforms to form Doppler filter nulls coinciding with aliasing artifact locations. Both simulation and laboratory results demonstrate effective performance, eliminating more than 99% of the aliased energy.
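
    At the waveform level, the randomization amounts to transmitting the frequency steps in a shuffled rather than monotonic order. The sketch below shows only that scheduling idea, with toy parameter values; it is not the paper's radar design or its null-placement mathematics.

        import numpy as np

        rng = np.random.default_rng(seed=1)
        n_steps, f0, df = 64, 9.0e9, 2.0e6       # sub-pulse count, start frequency, step size (toy values)
        linear_order = np.arange(n_steps)        # conventional stepped-frequency schedule
        random_order = rng.permutation(n_steps)  # randomized schedule used against alias artifacts
        freqs = f0 + random_order * df           # transmit frequency of each sub-pulse in the burst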

    Satellite Communications

    This study is motivated by the need to give the reader a broad view of the developments, key concepts, and technologies related to the evolution of the information society, with a focus on wireless communications and geoinformation technologies and their role in the environment. To give perspective, it aims to assist people active in industry, the public sector, and the Earth science fields by providing a basis for their continued work and thinking.

    Medical image analysis methods for anatomical surface reconstruction using tracked 3D ultrasound

    The thesis focuses on a study of techniques for acquisition and reconstruction of surface data from anatomical objects by means of tracked 3D ultrasound. In the context of the work, two experimental scanning systems are developed and tested on both artificial objects and biological tissues. The first system is based on the freehand ultrasound principle and utilizes a conventional 2D ultrasound transducer coupled with an electromechanical 3D position tracker. The main properties and basic features of this system are discussed. A number of experiments show that its accuracy under close-to-ideal conditions reaches 1.2 mm RMS. The second proposed system implements a sequential triggered scanning approach. The system consists of an ultrasound machine, a workstation, and a scanning body (a moving tank filled with liquid and a transducer fixation block) that performs transducer positioning and tracking functions. The system is tested on artificial and real bones. The performed experiments illustrate that it provides significantly better accuracy than the freehand ultrasound (about 0.2 mm RMS) and allows regular data to be acquired with good precision. This makes such a system a promising tool for orthopaedic and trauma surgeons during contactless, X-ray-free examinations of injured extremities. The second major subject of the thesis concerns the development of medical image analysis methods for 3D surface reconstruction and 2D object detection. We introduce a method based on mesh-growing surface reconstruction that is designed for the noisy and sparse data received from 3D tracked ultrasound scanners. A series of experiments on synthetic and ultrasound data shows an appropriate reconstruction accuracy. The reconstruction error is measured as the averaged distance between the faces of the mesh and the points of the cloud. Depending on the initial settings of the method, the error varies in the range 0.04-0.2% for artificial data and 0.3-0.7 mm for ultrasound bone data. The reconstructed surfaces correctly interpolate the original point clouds and demonstrate proper smoothness. The next significant problem considered in the work is 2D object detection. Although medical object detection is not integrated into the developed scanning systems, it can be used as a possible further extension of the systems for automatic detection of specific anatomical structures. We analyse existing object detection methods and introduce a modification of one based on the popular Generalized Hough Transform (GHT). Unlike the original GHT, the developed method is invariant to rotation and uniform scaling, and uses an intuitive two-point parametrization. We propose several implementations of the feature-to-vote conversion function with the corresponding vote analysis principles. Special attention is devoted to a study of the hierarchical vote analysis and its probabilistic properties. We introduce a parameter space subdivision strategy that reduces the probability of vote peak omission, and show that it can be efficiently implemented in practice using the Gumbel probability distribution.
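
    As background for the detection method, the classic GHT that the thesis modifies can be sketched as a model stage plus a voting stage. This is the textbook (rotation- and scale-sensitive) baseline, not the two-point-parametrized variant developed in the work.

        import numpy as np
        from collections import defaultdict

        def build_r_table(edge_points, gradient_angles, reference):
            # Model stage: index each edge point's displacement to the
            # reference point by its (quantized) gradient angle.
            r_table = defaultdict(list)
            for p, theta in zip(edge_points, gradient_angles):
                r_table[round(theta, 1)].append(reference - p)
            return r_table

        def vote(edge_points, gradient_angles, r_table, shape):
            # Detection stage: every image feature votes for candidate reference
            # points; peaks in the accumulator mark likely object locations.
            acc = np.zeros(shape)
            for p, theta in zip(edge_points, gradient_angles):
                for r in r_table.get(round(theta, 1), []):
                    x, y = (p + r).astype(int)
                    if 0 <= x < shape[0] and 0 <= y < shape[1]:
                        acc[x, y] += 1
            return acc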

    Effects of errorless learning on the acquisition of velopharyngeal movement control

    Session 1pSC - Speech Communication: Cross-Linguistic Studies of Speech Sound Learning of the Languages of Hong Kong (Poster Session). The implicit motor learning literature suggests a benefit for learning if errors are minimized during practice. This study investigated whether the same principle holds for learning velopharyngeal movement control. Normal-speaking participants learned to produce hypernasal speech in either an errorless learning condition (in which the possibility for errors was limited) or an errorful learning condition (in which the possibility for errors was not limited). The nasality level of the participants' speech was measured by nasometer and reflected by nasalance scores (in %). Errorless learners practiced producing hypernasal speech with a threshold nasalance score of 10% at the beginning, which gradually increased to a threshold of 50% at the end. The same set of threshold targets was presented to errorful learners, but in reversed order. Errors were defined by the proportion of speech with a nasalance score below the threshold. The results showed that, relative to errorful learners, errorless learners displayed fewer errors (17.7% vs. 50.7%) and a higher mean nasalance score (46.7% vs. 31.3%) during the acquisition phase. Furthermore, errorless learners outperformed errorful learners in both retention and novel transfer tests. Acknowledgment: Supported by The University of Hong Kong Strategic Research Theme for Sciences of Learning. © 2012 Acoustical Society of America
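
    The error definition above is simple to make concrete; a small illustrative sketch (not the study's analysis code) computes the error proportion for a block of trials against one threshold.

        def error_proportion(nasalance_scores, threshold):
            # An "error" is a production whose nasalance (%) falls below the target threshold.
            below = sum(1 for s in nasalance_scores if s < threshold)
            return below / len(nasalance_scores)

        # Errorless practice ramps the threshold up (10% -> 50%); errorful practice reverses it.
        errorless_thresholds = [10, 20, 30, 40, 50]
        errorful_thresholds = list(reversed(errorless_thresholds))
        print(error_proportion([12.0, 8.5, 15.2, 9.8], threshold=10))   # 0.5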

    Collision Detection and Merging of Deformable B-Spline Surfaces in Virtual Reality Environment

    This thesis presents a computational framework for representing, manipulating, and merging rigid and deformable freeform objects in a virtual reality (VR) environment. The core algorithms for collision detection, merging, and physics-based modeling used within this framework assume that all 3D deformable objects are B-spline surfaces. The interactive design tool can be represented as a B-spline surface, an implicit surface, or a point, allowing the user a variety of rigid or deformable tools. The collision detection system exploits the fact that the blending matrices used to discretize a B-spline surface are independent of the positions of the control points and can therefore be pre-calculated. Complex B-spline surfaces can be generated by merging various B-spline surface patches using the merging algorithm presented in this thesis. Finally, the physics-based modeling system uses a mass-spring representation to determine the deformation and the reaction force values provided to the user. This helps to simulate realistic material behaviour of the model and assists the user in validating the design before performing extensive product detailing or finite element analysis using commercially available CAD software. The novelty of the proposed method stems from the pre-calculated blending matrices used to generate the points for graphical rendering, collision detection, and merging of B-spline patches, and the nodes for the mass-spring system. This approach reduces computational time by avoiding the need to solve complex equations for the blending functions of B-splines and to invert large matrices. This alternative approach to mechanical concept design also does away with the need to build prototypes for conceptualization and preliminary validation of an idea, thereby reducing the time and cost of the concept design phase and the wastage of resources.
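
    The pre-calculation argument can be made concrete with a small sketch (a uniform cubic B-spline curve via SciPy is assumed here, not the thesis code): the blending matrix depends only on the knot vector and the parameter samples, so each deformation update reduces to one matrix product with the moved control points.

        import numpy as np
        from scipy.interpolate import BSpline

        def blending_matrix(knots, degree, params):
            # Every basis function evaluated at every parameter sample. Depends only
            # on the knot vector and the samples -- never on control points -- so it
            # is computed once and reused for every deformation update.
            n_basis = len(knots) - degree - 1
            B = np.empty((len(params), n_basis))
            for j in range(n_basis):
                coeffs = np.zeros(n_basis)
                coeffs[j] = 1.0
                B[:, j] = BSpline(knots, coeffs, degree)(params)
            return B

        knots = np.array([0, 0, 0, 0, 1, 2, 3, 3, 3, 3], dtype=float)
        u = np.linspace(0.0, 3.0, 50)
        B = blending_matrix(knots, 3, u)   # pre-calculated once
        ctrl = np.random.rand(6, 3)        # control points moved by the mass-spring step
        points = B @ ctrl                  # per-frame evaluation is a single matrix product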

    Beyond Traditional Teaching: The Potential of Large Language Models and Chatbots in Graduate Engineering Education

    In the rapidly evolving landscape of education, digital technologies have repeatedly disrupted traditional pedagogical methods. This paper explores the latest of these disruptions: the potential integration of large language models (LLMs) and chatbots into graduate engineering education. We begin by tracing historical and technological disruptions to provide context, and then introduce key terms such as machine learning and deep learning, along with the mechanisms underlying recent advancements, namely attention/transformer models and graphics processing units. The heart of our investigation lies in the application of an LLM-based chatbot in a graduate fluid mechanics course. We developed a question bank from the course material and assessed the chatbot's ability to provide accurate, insightful responses. The results are encouraging, demonstrating not only the bot's ability to effectively answer complex questions but also the potential advantages of chatbot usage in the classroom, such as the promotion of self-paced learning, the provision of instantaneous feedback, and the reduction of instructors' workload. The study also examines the transformative effect of intelligent prompting on enhancing the chatbot's performance. Furthermore, we demonstrate how powerful plugins like Wolfram Alpha for mathematical problem-solving and code interpretation can significantly extend the chatbot's capabilities, transforming it into a comprehensive educational tool. While acknowledging the challenges and ethical implications surrounding the use of such AI models in education, we advocate for a balanced approach. The use of LLMs and chatbots in graduate education can be greatly beneficial but requires ongoing evaluation and adaptation to ensure ethical and efficient use.
    Comment: 44 pages, 16 figures, preprint for PLOS ONE
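
    A question-bank evaluation of the kind described can be sketched as a simple loop. Everything below (the ask_llm stand-in, the keyword rubric, the sample question) is hypothetical illustration, not the paper's actual client or grading scheme.

        def ask_llm(question: str) -> str:
            # Hypothetical stand-in for a real chat-completion client call.
            return "The fluid velocity at a solid wall is zero relative to the wall."

        question_bank = [
            {"q": "State the no-slip boundary condition.",
             "keywords": ["velocity", "wall", "zero"]},
        ]

        def keyword_score(answer: str, keywords: list[str]) -> float:
            # Crude rubric: fraction of expected keywords present in the answer.
            hits = sum(1 for k in keywords if k.lower() in answer.lower())
            return hits / len(keywords)

        results = [(item["q"], keyword_score(ask_llm(item["q"]), item["keywords"]))
                   for item in question_bank]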