
    ObjectFlow: A Descriptor for Classifying Traffic Motion

    Abstract—We present and evaluate a novel scene descriptor for classifying urban traffic by object motion. Atomic 3D flow vectors are extracted and compensated for the vehicle's egomotion, using stereo video sequences. Votes cast by each flow vector are accumulated in a bird's eye view histogram grid. Since we are directly using low-level object flow, no prior object detection or tracking is needed. We demonstrate the effectiveness of the proposed descriptor by comparing it to two simpler baselines on the task of classifying more than 100 challenging video sequences into intersection and non-intersection scenarios. Our experiments reveal good classification performance in busy traffic situations, making our method a valuable complement to traditional approaches based on lane markings.
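    The voting scheme the abstract describes can be sketched roughly as follows: each ego-motion-compensated 3D flow vector votes into a bird's eye view grid cell determined by its ground-plane position. The grid dimensions, cell size, and magnitude-weighted voting below are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

def bev_histogram(flow_vectors, grid_size=(40, 40), cell_m=0.5):
    """Accumulate ego-motion-compensated 3D flow vectors into a
    bird's eye view histogram grid (hypothetical parameters).

    flow_vectors: iterable of (x, y, z, dx, dy, dz) tuples, with x
    lateral, y vertical, z forward, in metres.
    """
    hist = np.zeros(grid_size)
    half_x = grid_size[0] * cell_m / 2.0  # grid is centred laterally
    for x, y, z, dx, dy, dz in flow_vectors:
        # Project the vector's origin onto the ground plane (x, z).
        col = int((x + half_x) / cell_m)
        row = int(z / cell_m)
        if 0 <= row < grid_size[1] and 0 <= col < grid_size[0]:
            # Each vector votes with its horizontal motion magnitude.
            hist[row, col] += np.hypot(dx, dz)
    return hist
```

    A classifier (here, intersection vs. non-intersection) would then operate on the accumulated histogram rather than on tracked objects.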

    Artificial Intelligence and Systems Theory: Applied to Cooperative Robots

    This paper describes an approach to the design of a population of cooperative robots based on concepts borrowed from Systems Theory and Artificial Intelligence. The research has been developed under the SocRob project, carried out by the Intelligent Systems Laboratory at the Institute for Systems and Robotics - Instituto Superior Tecnico (ISR/IST) in Lisbon. The acronym of the project stands both for "Society of Robots" and "Soccer Robots", the case study where we are testing our population of robots. Designing soccer robots is a very challenging problem, where the robots must act not only to shoot a ball towards the goal, but also to detect and avoid static (walls, stopped robots) and dynamic (moving robots) obstacles. Furthermore, they must cooperate to defeat an opposing team. Our past and current research in soccer robotics includes cooperative sensor fusion for world modeling, object recognition and tracking, robot navigation, multi-robot distributed task planning and coordination, including cooperative reinforcement learning in cooperative and adversarial environments, and behavior-based architectures for real-time task execution of cooperating robot teams.

    Bird's Eye View: Cooperative Exploration by UGV and UAV

    This paper proposes a solution to the problem of cooperative exploration using an Unmanned Ground Vehicle (UGV) and an Unmanned Aerial Vehicle (UAV). More specifically, the UGV navigates through the free space, and the UAV provides enhanced situational awareness via its higher vantage point. The motivating application is search and rescue in a damaged building. A camera atop the UGV is used to track a fiducial tag on the underside of the UAV, allowing the UAV to maintain a fixed pose relative to the UGV. Furthermore, the UAV uses its front-facing camera to provide a bird's-eye view to the remote operator, allowing for observation beyond obstacles that obscure the UGV's sensors. The proposed approach has been tested using a TurtleBot 2 equipped with a Hokuyo laser range finder and a Parrot Bebop 2. Experimental results demonstrate the feasibility of this approach. This work is based on several open source packages and the generated code will be available online.
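    The relative-pose maintenance described above amounts to a feedback loop: the UGV's camera measures the tag's offset, and the UAV is commanded toward a desired offset. A minimal proportional-control sketch, with gains, frames, and function names that are assumptions rather than the paper's implementation:

```python
def uav_velocity_command(tag_offset, desired_offset=(0.0, 0.0, 1.5), kp=0.8):
    """Proportional controller driving the UAV toward a fixed pose
    above the UGV.

    tag_offset: measured (x, y, z) offset of the fiducial tag in the
    UGV camera frame, in metres.
    Returns a velocity command proportional to the pose error.
    """
    return tuple(kp * (d - m) for m, d in zip(tag_offset, desired_offset))
```

    A real system would add damping and integral terms and handle tag-detection dropouts, but the structure, measure the tag then servo on the error, is the same.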

    3D Object Detection and High-Resolution Traffic Parameters Extraction Using Low-Resolution LiDAR Data

    Traffic volume data collection is a crucial aspect of transportation engineering and urban planning, as it provides vital insights into traffic patterns, congestion, and infrastructure efficiency. Traditional manual methods of traffic data collection are both time-consuming and costly. However, the emergence of modern technologies, particularly Light Detection and Ranging (LiDAR), has revolutionized the process by enabling efficient and accurate data collection. Despite the benefits of using LiDAR for traffic data collection, previous studies have identified two major limitations that have impeded its widespread adoption. These are the need for multiple LiDAR systems to obtain complete point cloud information of objects of interest, as well as the labor-intensive process of annotating 3D bounding boxes for object detection tasks. In response to these challenges, the current study proposes an innovative framework that alleviates the need for multiple LiDAR systems and simplifies the laborious 3D annotation process. To achieve this goal, the study employed a single LiDAR system, aiming to reduce the data acquisition cost, and addressed its accompanying limitation of missing point cloud information by developing a Point Cloud Completion (PCC) framework to fill in missing point cloud information using point density. Furthermore, we also used zero-shot learning techniques to detect vehicles and pedestrians, and proposed a unique framework for extracting low- to high-level features from the object of interest, such as height, acceleration, and speed. Using the 2D bounding box detection and extracted height information, this study is able to generate 3D bounding boxes automatically without human intervention.

    Comment: 19 pages, 11 figures. This paper has been submitted for consideration for presentation at the 103rd Annual Meeting of the Transportation Research Board, January 202
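    The final step, lifting a 2D detection to a 3D bounding box using extracted height, can be sketched in a simplified form. The footprint coordinates, ground elevation, and dictionary layout below are illustrative assumptions; the paper's actual geometry is more involved.

```python
def bbox_2d_to_3d(footprint, ground_y, height_m):
    """Lift a 2D ground-plane bounding box to 3D using an extracted
    object height (simplified sketch; names are assumptions).

    footprint: (x_min, z_min, x_max, z_max) of the object's footprint
    on the ground plane, in metres.
    ground_y: elevation of the ground under the object.
    height_m: object height extracted from the point cloud.
    """
    x_min, z_min, x_max, z_max = footprint
    return {
        "center": ((x_min + x_max) / 2,
                   ground_y + height_m / 2,
                   (z_min + z_max) / 2),
        "size": (x_max - x_min, height_m, z_max - z_min),
    }
```

    With the footprint from a 2D detector and the height from the completed point cloud, no human-annotated 3D box is needed.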

    Intercomparison of UAV platforms for mapping snow depth distribution in complex alpine terrain

    Unmanned Aerial Vehicles (UAVs) offer great flexibility in acquiring images in inaccessible study areas, which are then processed with stereo-matching techniques through Structure-from-Motion (SfM) algorithms. This procedure allows generating high spatial resolution 3D point clouds. The high accuracy of these 3D models allows the production of detailed snow depth distribution maps through the comparison of point clouds from different dates. In this way, UAVs allow monitoring of remote areas that were not previously accessible. The large number of works evaluating this novel technique has not, to date, included a systematic evaluation of concurrent snowpack observations with different UAV devices. Taking this into account, and also bearing in mind that potential users of this technique may be interested in exploiting ready-to-use commercial devices, we conducted an evaluation of the snow depth distribution maps produced with different commercial UAVs. During the 2018-19 snow season, two multi-rotors (Parrot Anafi and DJI Mavic Pro2) and one fixed-wing device (SenseFly eBee Plus) were used on three different dates over a small test area (5 ha) within the Izas Experimental Catchment in the Central Pyrenees. Simultaneously, snowpack distribution was retrieved with a Terrestrial Laser Scanner (TLS, RIEGL LPM-321) and was considered as ground truth. Three different georeferencing methods (Ground Control Points, the ICP algorithm over snow-free areas, and RTK-GPS positioning) were tested, showing equivalent performances under optimum illumination conditions. Additionally, for the three acquisition dates, both multi-rotors were flown at two distinct altitudes (50 and 75 m) to evaluate the impact on the obtained snow depth maps. The evaluation with the TLS showed an equivalent performance of the two multi-rotors, with mean RMSE below 0.23 m and maximum volume deviations of less than 5%. Flying altitudes did not show significant differences in the obtained maps. These results were obtained under contrasting snow surface characteristics. This study reveals that under good illumination conditions and in relatively small areas, affordable commercial UAVs provide reliable estimations of snow distribution compared to more sophisticated and expensive close-range remote sensing techniques. Results obtained under overcast skies were poor, demonstrating that UAV observations require clear-sky conditions and acquisitions around noon to guarantee a homogeneous illumination of the study area.

    This work was supported by the research projects of the Spanish Ministry of Economy and Competitiveness projects "El papel de la nieve en la hidrologia de la peninsula iberica y su respuesta a procesos de cambio global-CGL2017-82216-R" and the JPI-Climate co-funded call of the European Commission, and INDECIS and CROSSDRO, which are part of ERA4CS and ERA-NET. Authors do not have any conflict of interest. J. Revuelto is supported by a "Juan de la Cierva Incorporacion" postdoctoral fellowship of the Spanish Ministry of Science, Innovation and Universities (Grant IJC2018-036260-I). I. Vidaller is supported by the Grant FPU18/04978 and is studying in the PhD program at the University of Zaragoza (Earth Science Department).
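    The core measurement behind the maps described above is a per-cell elevation difference between a snow-on and a snow-free surface model, evaluated against TLS ground truth via RMSE. A minimal sketch, assuming co-registered DEMs on a common grid (variable names are illustrative):

```python
import numpy as np

def snow_depth_map(dem_snow_on, dem_snow_free):
    """Snow depth as the per-cell elevation difference between a
    snow-on and a co-registered snow-free DEM; negative depths
    (noise or registration error) are clipped to zero."""
    return np.clip(dem_snow_on - dem_snow_free, 0.0, None)

def rmse(estimate, ground_truth):
    """Root-mean-square error against, e.g., a TLS-derived map."""
    return float(np.sqrt(np.mean((estimate - ground_truth) ** 2)))
```

    In practice the point clouds must first be georeferenced (GCPs, ICP over snow-free areas, or RTK-GPS, as tested in the paper) before differencing is meaningful.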

    Vehicular Instrumentation and Data Processing for the Study of Driver Intent

    The primary goal of this thesis is to provide processed experimental data needed to determine whether driver intentionality and driving-related actions can be predicted from quantitative and qualitative analysis of driver behaviour. Towards this end, an instrumented experimental vehicle capable of recording several synchronized streams of data from the surroundings of the vehicle, the driver gaze with head pose and the vehicle state in a naturalistic driving environment was designed and developed. Several driving data sequences in both urban and rural environments were recorded with the instrumented vehicle. These sequences were automatically annotated for relevant artifacts such as lanes, vehicles and safely driveable areas within road lanes. A framework and associated algorithms required for cross-calibrating the gaze tracking system with the world coordinate system mounted on the outdoor stereo system was also designed and implemented, allowing the mapping of the driver gaze with the surrounding environment. This instrumentation is currently being used for the study of driver intent, geared towards the development of driver maneuver prediction models

    Enriching teaching practice through place, arts and culture: resources for in-service teachers of the Bering Strait School District

    Master's Project (M.Ed.), University of Alaska Fairbanks, 2017. The SILKAT (Sustaining Indigenous and Local Knowledge, Art and Teaching) project joins together the University of Alaska Fairbanks and the Bering Strait School District in an effort to celebrate the rich cultural arts and Indigenous knowledge of northwest Alaska and bring the knowledge and ingenuity of local artists and culture-bearers to the forefront of teaching practices and curriculum. This work presents the content and format of one teacher professional development module based on one of seven arts and place-based core teaching practices: the ability to elicit student thinking and facilitate reflective thinking in students. It also examines the development of two Art and Culture units, grade 3 (Natural Landforms) and grade 5 (Responsibility to Community), both rooted in the cultural values and knowledge of artists and culture-bearers from the region. The research completed for this project examines the supporting literature that forms the backbone for both the professional development module and the Art and Culture units, including core practices, the implications of place and culture-based arts education, Visible Thinking routines, protocols, Studio Habits of Thinking, and Understanding by Design. Following the research is a synopsis of the methods used to create the PD module and Art and Culture units, as well as the plans for dissemination within the Bering Strait School District to enhance the skills and knowledge of in-service teachers in arts and culture.

    The SocRob Project: Soccer Robots or Society of Robots


    Stereo panoramic vision for obstacle detection

    Statistics show that automotive accidents occur regularly as a result of blind-spots and driver inattentiveness. Such incidents can have a large financial cost associated with them, as well as injury and loss of life. There are several methods currently available to assist drivers in avoiding such collisions. The simplest method is the installation of devices to increase the driver's field of view, such as extra mirrors and wide angle lenses. However, these still rely on an alert human observer and do not completely eliminate blind-spots. Another approach is to use an automated system which utilises sensors such as sonar or radar to gather range information. The range data is processed, and the driver is warned if a collision is imminent. Unfortunately, these systems have low angular resolution and limited sensing volumes. This was the motivation for developing a new method of obstacle detection. In this project, we have designed, built and evaluated a new type of sensor for blind spot monitoring. The stereo panoramic sensor consists of a video camera which views a convex mirrored surface. With the camera and mirror axes aligned, a full 360 degrees can be viewed perpendicular to the sensor axis. Two different mirror profiles were evaluated: the constant gain and resolution invariant surfaces. It was found that the constant gain mirror was the most effective for this application. It was shown that the sensor can be used to generate disparity maps from which obstacles can be segmented. This was done by applying the v-disparity algorithm, which has previously not been utilised in panoramic image processing. We found that this method was very powerful for segmenting objects, even in the case of extremely noisy data. The average successful obstacle detection rate was found to be around 90%, with a false detection rate of 8%. Our results indicate that range can be estimated reliably using a stereo panoramic sensor, with excellent angular accuracy in the azimuth direction. 
    In ground truth experiments it was found that the sensor was able to estimate range to within 20 cm of the true value, with a maximum angular error of 3°. Through experimentation, we determined that the physical system was approximately half as accurate in comparison to the simulations. However, it should be noted that the system is a prototype which could be developed further. Nevertheless, this sensor still has the advantage of a much higher angular resolution and larger sensing volume than the driver assistance systems reported to date.
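    The v-disparity algorithm mentioned above builds, for each image row, a histogram of the disparity values occurring in that row; the ground plane then appears as a slanted line and upright obstacles as near-vertical segments. A minimal sketch (the disparity range and array layout are assumptions):

```python
import numpy as np

def v_disparity(disparity_map, max_disp=64):
    """Build a v-disparity image from a dense disparity map.

    Each output row v is a histogram of the disparity values in row v
    of the input. Invalid disparities (outside [0, max_disp)) are
    ignored.
    """
    h, _ = disparity_map.shape
    vdisp = np.zeros((h, max_disp), dtype=np.int32)
    for v in range(h):
        row = disparity_map[v]
        valid = row[(row >= 0) & (row < max_disp)]
        vdisp[v] = np.bincount(valid.astype(np.int32), minlength=max_disp)
    return vdisp
```

    Obstacle segmentation then reduces to finding near-vertical line segments in the v-disparity image that rise above the ground-plane profile.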