223 research outputs found

    Development of Reduced-Order Meshless Solutions of Three-Dimensional Navier-Stokes Transport Phenomena

    Get PDF
    Emerging meshless technologies are very promising for numerically solving Euler and Navier-Stokes transport systems in one, two, and three dimensions (3-D). The Reduced-Order Meshless (ROM) technique developed in this work is applicable to a wide array of transport physics systems (i.e., fluid flow, heat transfer, gas dynamics, internal combustion flow and chemical reactions, and solid-liquid mixture flow) with various types of boundary and initial conditions. The applications to be benchmarked in this work include one- and two-dimensional advection and two- and three-dimensional convection-diffusion problems (Burgers’ equation). Computational solutions to these boundary-value problems will be demonstrated using the ROM approach, and the predicted solutions will be compared against the Meshless Local Petrov-Galerkin (MLPG) method and against exact solutions where they exist. Extensions to 3-D phenomenology will be attempted based on the conclusions obtained from computational studies establishing the existence, smoothness, and boundedness of 3-D Navier-Stokes transport systems. An approximate benchmark solution of the Navier-Stokes equations is also developed in this work using a linearized perturbation analysis. The classical paper on gas turbine throughflow, Three Dimensional Flows in Turbomachines (Marble, 1964), outlines this approximation procedure and produces solutions for a class of axisymmetric problems. An investigation into the behavior of these solutions uncovered a series of inconsistencies in the paper, which are outlined in detail and corrected where known to be in error. This research was supported by The Ohio State University College of Engineering.
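
    For context (standard forms stated here for reference, not reproduced from the thesis), the benchmark problems named above are typically written as a linear advection equation and the viscous Burgers' equation, which serves as the convection-diffusion model problem:

```latex
% Linear advection (one- and two-dimensional benchmark), constant velocity c:
\frac{\partial u}{\partial t} + \mathbf{c} \cdot \nabla u = 0

% Viscous Burgers' equation (two- and three-dimensional convection-diffusion
% benchmark), with kinematic viscosity \nu:
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = \nu \, \nabla^{2} \mathbf{u}
```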

    Ruth and Work

    Get PDF
    Introduction to the Book of Ruth
    Tragedy strikes the family of Ruth and Naomi (Ruth 1:1-22)
    God’s blessing is the source of human productivity (Ruth 2:1-4)
    God bestows his blessing of productivity through human labor (Ruth 2:5-7)
    Receiving God’s blessing of productivity means respecting co-workers (Ruth 2:8-16)
    God calls people to provide opportunities for the poor to work productively (Ruth 2:17-23)
    God’s law calls people of means to provide economic opportunities for the poor (Ruth 2:17-23)
    God leads individuals to provide economic opportunities for the poor and vulnerable (Ruth 2:17-23)
    God’s blessing is redoubled when people work according to his ways (Ruth 3:1-4:18)
    God works through human ingenuity (Ruth 3:1-18)
    God works through legal processes (Ruth 4:1-12)
    God works through the fruitfulness of childbearing (Ruth 4:13-18)
    Conclusions about the Book of Ruth

    Estimating adaptive cruise control model parameters from on-board radar units

    Full text link
    Two new methods are presented for estimating car-following model parameters using data collected from Adaptive Cruise Control (ACC) enabled vehicles. The vehicle is assumed to follow a constant time headway relative velocity model whose parameters are unknown and are to be determined. The first technique is a batch method that uses a least-squares approach to estimate the parameters from time-series data of the vehicle speed, space gap, and relative velocity of a lead vehicle. The second method is an online approach that uses a particle filter to simultaneously estimate both the state of the system and the model parameters. Numerical experiments demonstrate the accuracy and computational performance of the methods relative to a commonly used simulation-based optimization approach. The methods are also assessed on empirical data collected from a 2019 model year ACC vehicle driven in a highway environment. Speed, space gap, and relative velocity data are recorded directly from the factory-installed radar unit via the vehicle's CAN bus. All three methods return similar mean absolute error values in speed and spacing compared to the recorded data. The least-squares method has the fastest run-time performance and is up to three orders of magnitude faster than the other methods. The particle filter runs faster than real time and is therefore suitable for streaming applications in which the datasets can grow arbitrarily large. Comment: Accepted for poster presentation at the Transportation Research Board 2020 Annual Meeting, Washington, D.C.
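
    The batch estimator lends itself to a compact illustration. The sketch below is not the authors' code; it assumes the constant time headway relative velocity model in its commonly used form dv/dt = α(s − τv) + β(u − v) and the substitution γ = ατ, which makes the dynamics linear in (α, β, γ) so ordinary least squares applies. Function and variable names are illustrative.

```python
import numpy as np

# Minimal sketch (assumed model form, not the authors' code):
#     dv/dt = alpha * (s - tau * v) + beta * (u - v)
# with follower speed v, space gap s, and lead speed u. Writing
# gamma = alpha * tau gives dv/dt = alpha*s + beta*(u - v) - gamma*v,
# which is linear in (alpha, beta, gamma) and can be fit by least squares.

def fit_cth_rv(t, v, s, dv_rel):
    """t: timestamps, v: follower speed, s: space gap, dv_rel: lead minus follower speed."""
    accel = np.diff(v) / np.diff(t)                 # finite-difference acceleration
    A = np.column_stack([s[:-1], dv_rel[:-1], -v[:-1]])
    (alpha, beta, gamma), *_ = np.linalg.lstsq(A, accel, rcond=None)
    tau = gamma / alpha                             # recover the time headway
    return alpha, beta, tau

# Usage (hypothetical arrays extracted from the CAN-bus radar logs):
# alpha, beta, tau = fit_cth_rv(t, v, s, dv_rel)
```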

    Polygon Intersection-over-Union Loss for Viewpoint-Agnostic Monocular 3D Vehicle Detection

    Full text link
    Monocular 3D object detection is a challenging task because depth information is difficult to obtain from 2D images. A subset of viewpoint-agnostic monocular 3D detection methods also do not explicitly leverage scene homography or geometry during training, meaning that a model trained in this way can detect objects in images from arbitrary viewpoints. Such works predict the projections of the 3D bounding boxes onto the image plane to estimate the location of the 3D boxes, but these projections are not rectangular, so the calculation of IoU between the projected polygons is not straightforward. This work proposes an efficient, fully differentiable algorithm for calculating the IoU between two convex polygons, which can be utilized to compute the IoU between two 3D bounding box footprints viewed from an arbitrary angle. We test the performance of the proposed polygon IoU loss (PIoU loss) on three state-of-the-art viewpoint-agnostic 3D detection models. Experiments demonstrate that the proposed PIoU loss converges faster than L1 loss and that, in 3D detection models, a combination of PIoU loss and L1 loss gives better results than L1 loss alone (+1.64% AP70 for MonoCon on cars, +0.18% AP70 for RTM3D on cars, and +0.83%/+2.46% AP50/AP25 for MonoRCNN on cyclists).
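
    To make the geometric difficulty concrete, the sketch below computes the IoU of two convex polygons by clipping one against the other (Sutherland-Hodgman) and taking shoelace areas. It is a plain reference implementation for illustration only, not the paper's batched, fully differentiable PIoU formulation; the counter-clockwise vertex ordering and the function names are assumptions.

```python
import numpy as np

# Reference (non-differentiable) convex-polygon IoU for illustration only:
# clip one polygon against the other (Sutherland-Hodgman), then compute
# areas with the shoelace formula. Vertices are assumed counter-clockwise.

def _side(a, b, p):
    """> 0 if p lies to the left of the directed edge a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def _intersect(a, b, p, q):
    """Intersection of the infinite lines through (a, b) and (p, q)."""
    d1, d2 = b - a, q - p
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((p[0] - a[0]) * d2[1] - (p[1] - a[1]) * d2[0]) / denom
    return a + t * d1

def _clip(subject, clipper):
    """Clip convex polygon `subject` by convex polygon `clipper` (both CCW)."""
    output = [np.asarray(v, dtype=float) for v in subject]
    n = len(clipper)
    for i in range(n):
        a = np.asarray(clipper[i], dtype=float)
        b = np.asarray(clipper[(i + 1) % n], dtype=float)
        inputs, output = output, []
        m = len(inputs)
        for j in range(m):
            p, q = inputs[j], inputs[(j + 1) % m]
            p_in, q_in = _side(a, b, p) >= 0, _side(a, b, q) >= 0
            if p_in:
                output.append(p)
            if p_in != q_in:
                output.append(_intersect(a, b, p, q))
        if not output:
            return []
    return output

def _area(poly):
    """Shoelace area of an ordered vertex list."""
    poly = np.asarray(poly, dtype=float)
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def polygon_iou(poly_a, poly_b):
    """IoU of two convex polygons, e.g. projected 3D box footprints."""
    inter = _clip(poly_a, poly_b)
    inter_area = _area(inter) if len(inter) >= 3 else 0.0
    union = _area(poly_a) + _area(poly_b) - inter_area
    return inter_area / union if union > 0.0 else 0.0
```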

    The Interstate-24 3D Dataset: a new benchmark for 3D multi-camera vehicle tracking

    Full text link
    This work presents a novel video dataset recorded from overlapping highway traffic cameras along an urban interstate, enabling multi-camera 3D object tracking in a traffic monitoring context. Data is released from 3 scenes containing video from at least 16 cameras each, totaling 57 minutes in length. 877,000 3D bounding boxes and corresponding object tracklets are fully and accurately annotated for each camera field of view and are combined into a spatially and temporally continuous set of vehicle trajectories for each scene. Lastly, existing algorithms are combined to benchmark a number of 3D multi-camera tracking pipelines on the dataset. The results indicate that the dataset is challenging due to the difficulty of matching objects traveling at high speeds across cameras and heavy object occlusion, potentially lasting hundreds of frames, during congested traffic. This work aims to enable the development of accurate and automatic vehicle trajectory extraction algorithms, which will play a vital role in understanding the impacts of autonomous vehicle technologies on the safety and efficiency of traffic.

    Virtual trajectories for I-24 MOTION: data and tools

    Full text link
    This article introduces a new virtual trajectory dataset derived from the I-24 MOTION INCEPTION v1.0.0 dataset to address challenges in analyzing large but noisy trajectory datasets. Building on the concept of virtual trajectories, we provide a Python implementation to generate virtual trajectories from large raw datasets that are typically challenging to process due to their size. We demonstrate the practical utility of these trajectories in assessing speed variability and travel times across different lanes within the INCEPTION dataset. The virtual trajectory dataset opens up future research on traffic waves and their impact on energy consumption.
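
    As a rough illustration of the virtual trajectory idea (a probe vehicle integrated through an aggregated speed field), the sketch below bins raw trajectory samples into a space-time mean-speed grid and advances a virtual vehicle with forward Euler. It is not the released implementation; the bin sizes, field construction, and function names are assumptions.

```python
import numpy as np

# Sketch only (assumptions, not the released tool): build a mean-speed field
# v(x, t) on a space-time grid from raw (position, time, speed) samples, then
# integrate a virtual probe vehicle with dx/dt = v(x, t).

def mean_speed_field(positions, times, speeds, dx=50.0, dt=5.0):
    """Average raw speed samples onto a space-time grid (NaN where no data)."""
    x_edges = np.arange(positions.min(), positions.max() + dx, dx)
    t_edges = np.arange(times.min(), times.max() + dt, dt)
    weighted, _, _ = np.histogram2d(positions, times, bins=[x_edges, t_edges],
                                    weights=speeds)
    counts, _, _ = np.histogram2d(positions, times, bins=[x_edges, t_edges])
    with np.errstate(invalid="ignore"):
        return weighted / counts, x_edges, t_edges

def virtual_trajectory(field, x_edges, t_edges, x0, t0, t_end, step=1.0):
    """Forward-Euler integration of a virtual probe through the speed field."""
    xs, ts, x, t = [x0], [t0], x0, t0
    while t < t_end:
        i = int(np.clip(np.searchsorted(x_edges, x) - 1, 0, field.shape[0] - 1))
        j = int(np.clip(np.searchsorted(t_edges, t) - 1, 0, field.shape[1] - 1))
        v = field[i, j]
        if np.isnan(v):          # empty cell: hold position (assumption)
            v = 0.0
        x, t = x + v * step, t + step
        xs.append(x)
        ts.append(t)
    return np.array(ts), np.array(xs)
```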