The Facility Location Problem
The purpose of this study was to analyze the location of an emergency facility within a town, based on information supplied by the village, and to use the results to determine the optimal site for the facility. A model of the problem was developed using a spreadsheet and a computer program to record and analyze response times for different candidate facility locations. Simplifying assumptions were made so that the situations could be computed readily in the spreadsheet and program. The calculated information was then used to build a framework of demand density across a gridded map. Once the computer program was updated to handle the large amount of data, results were obtained. Based on the data and modeling, the current emergency facility was not in the most opportune location, and another site was deemed better suited to serving the community.
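The spreadsheet-style search described above can be sketched as a brute-force scan over grid cells, placing the facility at the cell that minimises demand-weighted travel distance (a simple proxy for response time). The grid values and the Manhattan metric below are illustrative assumptions, not the study's actual data.

```python
# Hypothetical sketch: pick the grid cell minimising demand-weighted
# travel distance across a gridded demand-density map.
from itertools import product

def best_facility_cell(demand):
    """demand: 2-D list of non-negative demand weights per grid cell."""
    rows, cols = len(demand), len(demand[0])
    cells = list(product(range(rows), range(cols)))

    def weighted_cost(fr, fc):
        # Manhattan distance approximates travel along a street grid.
        return sum(demand[r][c] * (abs(r - fr) + abs(c - fc))
                   for r, c in cells)

    return min(cells, key=lambda rc: weighted_cost(*rc))

# Demand concentrated in the lower-right corner pulls the optimum there.
grid = [[0, 0, 1],
        [0, 1, 2],
        [1, 2, 9]]
print(best_facility_cell(grid))  # (2, 2)
```

With real data the grid would be far larger, but the structure of the computation is the same, which is why a spreadsheet model is feasible.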
Building Footprint Extraction from LiDAR Data and Imagery Information
This study presents an automatic method for the regularisation of building outlines. Initially, building segments are extracted using a new fusion method. Data-driven and model-driven approaches are then combined to generate approximate building polygons. The core of the method is a novel data-driven algorithm based on a likelihood equation derived from the geometrical properties of a building. Finally, Gauss-Helmert and Gauss-Markov model adjustments are implemented and modified to regularise building outlines subject to orthogonality constraints.
A quadtree-based allocation method for a class of large discrete Euclidean location problems: large location problems
A special data compression approach using a quadtree-based method is proposed for allocating very large sets of demand points to their nearest facilities while eliminating aggregation error. This allocation procedure is shown to be extremely effective when solving very large facility location problems in Euclidean space. The method aggregates demand points where doing so introduces no aggregation-based allocation error, and disaggregates them where necessary. It is assessed first on allocation problems and then embedded into the search for solving a class of discrete facility location problems, namely the p-median and vertex p-centre problems. Randomly generated and TSP datasets are used for testing. The experimental results show that the quadtree-based approach is very effective in reducing computing time for this class of location problems.
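The error-free aggregation idea can be sketched briefly. Euclidean nearest-facility (Voronoi) regions are convex, so a square whose four corners all share the same nearest facility lies entirely in that facility's region and can be allocated wholesale with no aggregation error; otherwise the square is split into four quadrants. This is a minimal illustration of the principle, not the paper's implementation; the coordinates are assumptions.

```python
# Minimal quadtree allocation sketch: aggregate where error-free,
# disaggregate (split) where corner ownership disagrees.
import math

def nearest(p, facilities):
    return min(range(len(facilities)),
               key=lambda i: math.dist(p, facilities[i]))

def allocate(points, box, facilities, out):
    """box = (xmin, ymin, xmax, ymax); out maps point -> facility index."""
    if not points:
        return
    xmin, ymin, xmax, ymax = box
    corners = [(xmin, ymin), (xmin, ymax), (xmax, ymin), (xmax, ymax)]
    owners = {nearest(c, facilities) for c in corners}
    if len(owners) == 1:                 # whole cell in one Voronoi region
        f = owners.pop()
        for p in points:
            out[p] = f
        return
    if xmax - xmin < 1e-6:               # guard: point on a Voronoi edge
        for p in points:
            out[p] = nearest(p, facilities)
        return
    xm, ym = (xmin + xmax) / 2, (ymin + ymax) / 2   # disaggregate
    quads = [(xmin, ymin, xm, ym), (xm, ymin, xmax, ym),
             (xmin, ym, xm, ymax), (xm, ym, xmax, ymax)]
    for q in quads:
        sub = [p for p in points
               if q[0] <= p[0] <= q[2] and q[1] <= p[1] <= q[3]]
        allocate(sub, q, facilities, out)

facilities = [(0.0, 0.0), (10.0, 10.0)]
assignment = {}
allocate([(1.0, 1.0), (2.0, 7.0), (9.0, 9.0)], (0.0, 0.0, 10.0, 10.0),
         facilities, assignment)
print(assignment)  # {(1.0, 1.0): 0, (2.0, 7.0): 0, (9.0, 9.0): 1}
```

The saving comes from the aggregated case: one corner test replaces a nearest-facility computation per demand point inside the cell.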
Rail-Road terminal locations: aggregation errors and best potential locations on large networks
In network location problems, the number of potential locations is often too large for a solution to be found in reasonable computing time. Aggregation techniques are therefore often used to reduce the number of nodes. This reduction in problem size makes the problems more computationally tractable, but aggregation introduces errors into the solutions. Some of these errors are estimated in this paper. A method is then outlined that helps isolate the best potential locations for rail-road terminals embedded in a hub-and-spoke network. Hub location problems arise when it is desirable to consolidate flows at certain locations, called hubs. The basic idea is to use commodity flows and their geographic spread as input to determine a set of potential locations for hub terminals. The exercise is carried out for the trans-European networks. These potential locations can then be used as input to an optimal location method.
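One simple way to turn commodity flows into candidate hub locations, in the spirit of the abstract, is to rank nodes by the total flow they originate or attract and keep the busiest ones as the reduced candidate set. This is a hedged illustration of the screening idea only; the flow matrix and city names are invented for the example.

```python
# Hypothetical flow-based screening: keep the k nodes handling the most
# origin-destination flow as candidate hub locations.
def candidate_hubs(flows, k):
    """flows: dict {(origin, dest): tonnage}; returns top-k busiest nodes."""
    totals = {}
    for (o, d), t in flows.items():
        totals[o] = totals.get(o, 0) + t
        totals[d] = totals.get(d, 0) + t
    return sorted(totals, key=totals.get, reverse=True)[:k]

flows = {("Antwerp", "Milan"): 120, ("Rotterdam", "Milan"): 90,
         ("Antwerp", "Lyon"): 40, ("Rotterdam", "Lyon"): 30}
print(candidate_hubs(flows, 2))  # ['Milan', 'Antwerp']
```

An exact hub location model would then be solved over this reduced candidate set, which is where the aggregation error discussed above enters.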
A review of network location theory and models
In this study, we review the existing literature on network location problems. The study has a broad scope that includes problems featuring desirable and undesirable facilities, point facilities and extensive facilities, monopolistic and competitive markets, and single or multiple objectives. Deterministic and stochastic models as well as robust models are covered. Demand data aggregation is also discussed. More than 500 papers in this area are reviewed, and critical issues, research directions, and problem extensions are emphasized. (Erdoğan, Damla Selin, M.S.)
UnRectDepthNet: Self-Supervised Monocular Depth Estimation using a Generic Framework for Handling Common Camera Distortion Models
In classical computer vision, rectification is an integral part of multi-view depth estimation. It typically includes epipolar rectification and lens distortion correction. This process simplifies depth estimation significantly, and thus it has been adopted in CNN approaches. However, rectification has several side effects, including a reduced field of view (FOV), resampling distortion, and sensitivity to calibration errors. The effects are particularly pronounced in the case of significant distortion (e.g., wide-angle fisheye cameras). In this paper, we propose a generic scale-aware self-supervised pipeline for estimating depth, Euclidean distance, and visual odometry from unrectified monocular videos. We demonstrate a level of precision on the unrectified KITTI dataset, with barrel distortion, comparable to that on the rectified KITTI dataset. The intuition is that the rectification step can be implicitly absorbed within the CNN model, which learns the distortion model without increased complexity. Our approach does not suffer from a reduced field of view and avoids the computational cost of rectification at inference time. To further illustrate the general applicability of the proposed framework, we apply it to wide-angle fisheye cameras with a 190° horizontal field of view. The training framework, UnRectDepthNet, takes the camera distortion model as an argument and adapts the projection and unprojection functions accordingly. The proposed algorithm is evaluated further on the rectified KITTI dataset, and we achieve state-of-the-art results that improve upon our previous work, FisheyeDistanceNet. Qualitative results on a distorted test scene video sequence indicate excellent performance: https://youtu.be/K6pbx3bU4Ss. Comment: Minor fixes added after IROS 2020 camera-ready submission. IROS 2020 presentation video: https://www.youtube.com/watch?v=3Br2KSWZRr
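The idea of passing the camera distortion model as an argument, so that projection and unprojection adapt to it, can be sketched with a simple polynomial radial (barrel) model. This is an illustrative assumption, not the paper's actual parameterisation; the coefficients and focal length are invented for the example.

```python
# Sketch: a pluggable distortion model with matching project/unproject.
import numpy as np

class RadialDistortion:
    """r_d = r * (1 + k1*r^2 + k2*r^4), a common barrel-distortion model."""
    def __init__(self, k1, k2):
        self.k1, self.k2 = k1, k2

    def distort(self, r):
        return r * (1 + self.k1 * r**2 + self.k2 * r**4)

    def undistort(self, r_d, iters=20):
        r = r_d.copy()                       # fixed-point inversion
        for _ in range(iters):
            r = r_d / (1 + self.k1 * r**2 + self.k2 * r**4)
        return r

def _safe_scale(num, den):
    # avoid 0/0 at the principal point
    out = np.ones_like(den)
    mask = den > 0
    out[mask] = num[mask] / den[mask]
    return out

def project(points_3d, f, model):
    """Pinhole projection followed by the supplied distortion model."""
    x, y, z = points_3d.T
    u, v = x / z, y / z
    r = np.hypot(u, v)
    scale = _safe_scale(model.distort(r), r)
    return f * scale[:, None] * np.stack([u, v], axis=1)

def unproject(pixels, depth, f, model):
    """Inverse mapping: undistort, then back-project with known depth."""
    uv_d = pixels / f
    r_d = np.hypot(uv_d[:, 0], uv_d[:, 1])
    scale = _safe_scale(model.undistort(r_d), r_d)
    uv = scale[:, None] * uv_d
    return np.concatenate([uv * depth[:, None], depth[:, None]], axis=1)
```

Swapping in a different distortion class (e.g., a fisheye polynomial in the angle of incidence) changes only the model object, not the training loop, which is the flexibility the framework's design aims for.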
Granulometry, chemistry and physical interactions of non-colloidal particulate matter transported by urban storm water
Urban rainfall-runoff is a major source of anthropogenic pollution to natural water bodies. Particulate matter generated by anthropogenic environments and activities is a constituent of environmental concern as well as a carrier substrate for reactive contaminants such as metals. The partitioning, transport, and transformation of particulate-bound contaminants are determined by the granulometry and the physical and geochemical properties of the particulate carriers. Previous research emphasized the transport of colloidal and suspended particles in rainfall-runoff; settleable and sediment material was ignored, even though it is a major granulometric fraction that may contain most of the sorbed or transported constituents such as metals, organics, or inorganics. In this research, the entire flow section of rainfall-runoff was captured. Particulate matter in the catchment was analyzed for solid fractions, metal partitioning and distribution, fractal nature, morphology, chemical composition, and settling characteristics. Unsteady hydrodynamic conditions and short residence times mean that coagulation and flocculation remain a dynamic mechanism in urban rainfall-runoff. Natural coagulation and flocculation (C/F), as well as coagulant/flocculant-assisted C/F, was studied for particles in urban rainfall-runoff. A C/F model incorporating fractal geometry and a sedimentation mechanism was applied to simulate the particle size distribution in a 2-m settling column test. The overarching objective is to facilitate decision-making with respect to urban runoff management, regulations, treatment, and the potential disposal of runoff sediment residuals.
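The role fractal geometry plays in settling can be illustrated with a hedged sketch: Stokes' law gives the terminal velocity of a solid grain, while a fractal aggregate of dimension D has its excess density reduced by a factor (d0/d)^(3-D), so a loose floc settles far more slowly than a solid particle of the same diameter. This is a textbook-style illustration, not the study's model; all parameter values below are assumptions.

```python
# Hedged sketch: Stokes settling velocity, with a fractal-density
# correction for flocs (primary-particle diameter d0, fractal dimension D).
G = 9.81        # gravity, m/s^2
MU = 1.0e-3     # dynamic viscosity of water, Pa*s
RHO_W = 1000.0  # water density, kg/m^3

def stokes_velocity(d, rho_p):
    """Terminal settling velocity (m/s) of a solid sphere, diameter d (m)."""
    return G * (rho_p - RHO_W) * d**2 / (18 * MU)

def floc_velocity(d, rho_p, d0, D):
    """Fractal floc: excess density scaled by (d0/d)**(3 - D), d >= d0."""
    excess = (rho_p - RHO_W) * (d0 / d) ** (3 - D)
    return G * excess * d**2 / (18 * MU)

# Example: a 100 µm solid quartz grain versus a 100 µm loose floc (D = 2.0).
print(stokes_velocity(1e-4, 2650.0))           # ~9.0e-3 m/s
print(floc_velocity(1e-4, 2650.0, 1e-6, 2.0))  # two orders of magnitude slower
```

In a settling column test this difference translates directly into which size fractions are removed over the column height, which is why the C/F model must track aggregate structure, not just size.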