
    Intelligent optical performance monitor using multi-task learning based artificial neural network

    An intelligent optical performance monitor using a multi-task learning based artificial neural network (MTL-ANN) is designed for simultaneous OSNR monitoring and modulation format identification (MFI). The signals' amplitude histograms (AHs), obtained after the constant modulus algorithm, are selected as the input features for the MTL-ANN. Experimental results for 20-Gbaud NRZ-OOK, PAM4 and PAM8 signals demonstrate that the MTL-ANN achieves OSNR monitoring and MFI simultaneously with higher accuracy and stability than single-task learning based ANNs (STL-ANNs). The results show an MFI accuracy of 100% and an OSNR monitoring root-mean-square error of 0.63 dB for the three modulation formats under consideration. Furthermore, the number of neurons needed for the single MTL-ANN is almost half that of the STL-ANNs, which enables reduced-complexity optical performance monitoring devices for real-time performance monitoring.
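    The architecture described above can be sketched as a small forward pass: a shared hidden layer fed by the amplitude histogram, with one regression head for OSNR and one classification head for the modulation format. This is a minimal illustration only; the layer sizes, weights, and activation choices are assumptions, not the trained network from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed dimensions: a 100-bin amplitude histogram (AH) as input, one
    # shared hidden layer, and three formats (NRZ-OOK, PAM4, PAM8).
    N_BINS, N_HIDDEN, N_FORMATS = 100, 24, 3

    # Randomly initialised weights stand in for trained parameters.
    W_shared = rng.normal(0, 0.1, (N_BINS, N_HIDDEN))
    b_shared = np.zeros(N_HIDDEN)
    W_osnr = rng.normal(0, 0.1, (N_HIDDEN, 1))         # OSNR regression head
    W_mfi = rng.normal(0, 0.1, (N_HIDDEN, N_FORMATS))  # MFI classification head

    def mtl_ann(ah):
        """One forward pass: the shared representation feeds both task heads."""
        h = np.tanh(ah @ W_shared + b_shared)   # shared hidden layer
        osnr = float(h @ W_osnr)                # OSNR estimate (dB, untrained here)
        logits = h @ W_mfi
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                    # softmax over modulation formats
        return osnr, probs

    ah = rng.random(N_BINS)
    ah /= ah.sum()                              # histogram normalised to unit mass
    osnr, probs = mtl_ann(ah)
    ```

    Because both heads share one hidden layer, the histogram features are computed once per input, which is the source of the neuron-count saving over separate single-task networks.
    
    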

    Safe2Ditch Steer-To-Clear Development and Flight Testing

    This paper describes a series of small unmanned aerial system (sUAS) flights performed at NASA Langley Research Center in April and May of 2019 to test a newly added Steer-to-Clear feature for the Safe2Ditch (S2D) prototype system. S2D is an autonomous crash management system for sUAS. Its function is to detect the onset of an emergency for an autonomous vehicle and to enable that vehicle in distress to execute a safe landing that avoids injuring people on the ground or damaging property. Flight tests were conducted at the City Environment Range for Testing Autonomous Integrated Navigation (CERTAIN) range at NASA Langley. Prior testing of S2D focused on rerouting to an alternate ditch site when an occupant was detected in the primary ditch site. For Steer-to-Clear testing, S2D was limited to a single ditch site option to force engagement of the Steer-to-Clear mode. The implementation of Steer-to-Clear for the flight prototype used a simple method that divides the target ditch site into four quadrants. An RC car was driven in circles in one quadrant to simulate an occupant in that ditch site. A simple implementation of Steer-to-Clear was programmed to land in the opposite quadrant to maximize distance to the occupant's quadrant. A successful mission was tallied when this occurred. Out of nineteen flights, thirteen resulted in successful missions. Data logs from the flight vehicle and the RC car indicated that unsuccessful missions were due to geolocation error between the actual location of the RC car and the location derived by the Vision Assisted Landing component of S2D on the flight vehicle. Video data indicated that while the Vision Assisted Landing component reliably identified the location of the ditch site occupant in the image frame, the conversion of the occupant's location to earth coordinates was sometimes adversely impacted by errors in the sensor data needed to perform the transformation.
Logged sensor data was analyzed to identify the primary error sources and their impact on geolocation accuracy. Three trends were observed in the data evaluation phase. In the first, geolocation errors were relatively large at the flight vehicle's cruise altitude but decreased as the vehicle descended; this was the expected behavior and was attributed to sensor errors of the inertial measurement unit (IMU). The second trend showed a distinct sinusoidal error throughout the descent that did not always reduce with altitude. The third trend showed high scatter in the data that did not correlate well with altitude. Possible sources of the observed error and compensation techniques are discussed.
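    The quadrant scheme described above can be sketched in a few lines: classify which quadrant of the ditch site contains the occupant, then target the diagonally opposite quadrant to maximize separation. The function names, coordinate convention, and quadrant numbering are illustrative assumptions, not the flight code.

    ```python
    # Hypothetical sketch of the Steer-to-Clear quadrant logic. The ditch
    # site is split into four quadrants around its centre (cx, cy);
    # quadrant indices are 0 = NE, 1 = NW, 2 = SW, 3 = SE.

    def occupant_quadrant(x, y, cx, cy):
        """Quadrant index of the detected occupant at (x, y)."""
        if x >= cx:
            return 0 if y >= cy else 3
        return 1 if y >= cy else 2

    def steer_to_clear_quadrant(x, y, cx, cy):
        """Diagonally opposite quadrant, maximizing distance from the occupant."""
        return (occupant_quadrant(x, y, cx, cy) + 2) % 4
    ```

    Note that this logic is only as good as the occupant's estimated earth coordinates; as the abstract reports, errors in the image-to-earth transformation were the dominant cause of unsuccessful missions.
    
    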

    Ms Pac-Man versus Ghost Team CEC 2011 competition

    Games provide an ideal test bed for computational intelligence, and significant progress has been made in recent years, most notably in games such as Go, where the level of play is now competitive with expert human play on smaller boards. Recently, a significantly more complex class of games has received increasing attention: real-time video games. These games pose many new challenges, including strict time constraints, simultaneous moves and open-endedness. Unlike in traditional board games, computational play is generally unable to compete with human players. One driving force in improving the overall performance of artificial intelligence players is game competitions, where practitioners may evaluate and compare their methods against those submitted by others and possibly human players as well. In this paper we introduce a new competition based on the popular arcade video game Ms Pac-Man: Ms Pac-Man versus Ghost Team. The competition, to be held at the Congress on Evolutionary Computation 2011 for the first time, allows participants to develop controllers for either the Ms Pac-Man agent or for the Ghost Team. Unlike previous Ms Pac-Man competitions, which relied on screen capture, the players now interface directly with the game engine. In this paper we introduce the competition, including a review of previous work as well as a discussion of several aspects regarding the setting up of the game competition itself. © 2011 IEEE
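    The strict time constraints mentioned above shape how real-time game controllers are typically written: the engine polls the controller each tick, and the controller must return a move within a fixed budget. The sketch below illustrates that anytime pattern; the function names and the 40 ms budget are assumptions for illustration, not the competition's actual API.

    ```python
    import time

    # Assumed per-tick decision budget (real-time games typically allow
    # a few tens of milliseconds per move).
    TIME_BUDGET_S = 0.040

    def choose_move(game_state, legal_moves):
        """Anytime decision loop: keep the best move found before the deadline."""
        deadline = time.monotonic() + TIME_BUDGET_S
        best = legal_moves[0]          # always hold a fallback move
        for move in legal_moves:
            if time.monotonic() >= deadline:
                break                  # out of time: answer with the best so far
            # ... evaluate `move` against `game_state` here ...
            best = move
        return best
    ```

    Keeping a fallback move ready before any search begins is what makes such a controller safe under a hard deadline: it can always answer, even if the evaluation loop is cut short.
    
    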