Where Are My Intelligent Assistant's Mistakes? A Systematic Testing Approach
Intelligent assistants are handling increasingly critical tasks, but until now, end users have had no way to systematically assess where their assistants make mistakes. For some intelligent assistants, this is a serious problem: if the assistant is doing work that is important, such as assisting with qualitative research or monitoring an elderly parent’s safety, the user may pay a high cost for unnoticed mistakes. This paper addresses the problem with WYSIWYT/ML (What You See Is What You Test for Machine Learning), a human/computer partnership that enables end users to systematically test intelligent assistants. Our empirical evaluation shows that WYSIWYT/ML helped end users find assistants’ mistakes significantly more effectively than ad hoc testing. Not only did it allow users to assess an assistant’s work on an average of 117 predictions in only 10 minutes, it also scaled to a much larger data set, assessing an assistant’s work on 623 out of 1,448 predictions using only the users’ original 10 minutes’ testing effort.
Intelligent optical performance monitor using multi-task learning based artificial neural network
An intelligent optical performance monitor using a multi-task learning based artificial neural network (MTL-ANN) is designed for simultaneous OSNR monitoring and modulation format identification (MFI). Signals' amplitude histograms (AHs) after the constant modulus algorithm are selected as the input features for the MTL-ANN. Experimental results for 20-Gbaud NRZ-OOK, PAM4 and PAM8 signals demonstrate that the MTL-ANN achieves OSNR monitoring and MFI simultaneously with higher accuracy and stability than single-task learning based ANNs (STL-ANNs). The results show an MFI accuracy of 100% and an OSNR monitoring root-mean-square error of 0.63 dB for the three modulation formats under consideration. Furthermore, the number of neurons needed for the single MTL-ANN is almost half that of the STL-ANNs, which enables reduced-complexity optical performance monitoring devices for real-time performance monitoring.
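The core idea of the MTL-ANN described above is a single network whose shared hidden layer feeds two task-specific heads: a regression head for OSNR and a classification head for modulation format. The sketch below illustrates that structure with a minimal NumPy forward pass; the layer sizes, weights, and function names are illustrative assumptions, not the paper's actual architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 100 amplitude-histogram bins in, one shared
# hidden layer, three format classes (NRZ-OOK, PAM4, PAM8). These sizes
# are illustrative, not taken from the paper.
N_BINS, N_HIDDEN, N_FORMATS = 100, 24, 3

W_shared = rng.normal(scale=0.1, size=(N_BINS, N_HIDDEN))
W_osnr = rng.normal(scale=0.1, size=(N_HIDDEN, 1))         # regression head
W_mfi = rng.normal(scale=0.1, size=(N_HIDDEN, N_FORMATS))  # classification head

def mtl_forward(ah):
    """Forward pass: one shared representation feeds both task heads."""
    h = np.tanh(ah @ W_shared)              # shared features (the MTL part)
    osnr = (h @ W_osnr).squeeze(-1)         # scalar OSNR estimate per sample
    logits = h @ W_mfi
    mfi = np.exp(logits)
    mfi /= mfi.sum(axis=-1, keepdims=True)  # softmax over modulation formats
    return osnr, mfi

ah = rng.random((4, N_BINS))  # a batch of 4 amplitude histograms
osnr, mfi = mtl_forward(ah)
```

Because both heads share the hidden layer, the network needs roughly one set of feature-extraction neurons instead of two, which is the source of the complexity reduction the abstract reports.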
Safe2Ditch Steer-To-Clear Development and Flight Testing
This paper describes a series of small unmanned aerial system (sUAS) flights performed at NASA Langley Research Center in April and May of 2019 to test a newly added Steer-to-Clear feature for the Safe2Ditch (S2D) prototype system. S2D is an autonomous crash management system for sUAS. Its function is to detect the onset of an emergency for an autonomous vehicle, and to enable that vehicle in distress to execute safe landings to avoid injuring people on the ground or damaging property. Flight tests were conducted at the City Environment Range for Testing Autonomous Integrated Navigation (CERTAIN) range at NASA Langley. Prior testing of S2D focused on rerouting to an alternate ditch site when an occupant was detected in the primary ditch site. For Steer-to-Clear testing, S2D was limited to a single ditch site option to force engagement of the Steer-to-Clear mode. The implementation of Steer-to-Clear for the flight prototype used a simple method to divide the target ditch site into four quadrants. An RC car was driven in circles in one quadrant to simulate an occupant in that ditch site. A simple implementation of Steer-to-Clear was programmed to land in the opposite quadrant to maximize distance to the occupant's quadrant. A successful mission was tallied when this occurred. Out of nineteen flights, thirteen resulted in successful missions. Data logs from the flight vehicle and the RC car indicated that unsuccessful missions were due to geolocation error between the actual location of the RC car and its location as derived by the Vision Assisted Landing component of S2D on the flight vehicle. Video data indicated that while the Vision Assisted Landing component reliably identified the location of the ditch site occupant in the image frame, the conversion of the occupant's location to earth coordinates was sometimes adversely impacted by errors in the sensor data needed to perform the transformation.
Logged sensor data was analyzed in an attempt to identify the primary error sources and their impact on geolocation accuracy. Three trends were observed in the data evaluation phase. In the first trend, errors in geolocation were relatively large at the flight vehicle's cruise altitude but decreased as the vehicle descended. This was the expected behavior and was attributed to sensor errors of the inertial measurement unit (IMU). The second trend showed a distinct sinusoidal error for the entire descent that did not always reduce with altitude. The third trend showed high scatter in the data, which did not correlate well with altitude. Possible sources of the observed error and compensation techniques are discussed.
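The opposite-quadrant rule described in the abstract is simple enough to sketch directly: split the ditch site into four quadrants about its center, find the occupant's quadrant, and target the diagonally opposite one. The function and parameter names below are illustrative, not taken from the S2D flight code.

```python
# Quadrants are encoded as 2-bit values: bit 0 set if east of center,
# bit 1 set if north of center. The diagonally opposite quadrant is
# obtained by flipping both bits. This encoding is an assumption made
# for illustration; S2D's internal representation may differ.

def occupant_quadrant(x, y, cx, cy):
    """Quadrant (0..3) of the occupant relative to the site center (cx, cy)."""
    return (1 if x >= cx else 0) + (2 if y >= cy else 0)

def opposite_quadrant(q):
    """Diagonally opposite quadrant: flip both the east and north bits."""
    return q ^ 0b11

# Occupant detected north-east of center -> land in the south-west quadrant,
# maximizing distance from the occupant's quadrant.
q = occupant_quadrant(x=12.0, y=8.0, cx=10.0, cy=5.0)
target = opposite_quadrant(q)
```

As the abstract notes, the rule itself is trivial; the hard part in flight testing was the image-to-earth coordinate transformation that feeds it, since sensor errors in that conversion shift which quadrant the occupant appears to be in.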
Ms Pac-Man versus Ghost Team CEC 2011 competition
Games provide an ideal test bed for computational intelligence, and significant progress has been made in recent years, most notably in games such as Go, where the level of play is now competitive with expert human play on smaller boards. Recently, a significantly more complex class of games has received increasing attention: real-time video games. These games pose many new challenges, including strict time constraints, simultaneous moves and open-endedness. Unlike in traditional board games, computational play is generally unable to compete with human players. One driving force in improving the overall performance of artificial intelligence players is game competitions, where practitioners may evaluate and compare their methods against those submitted by others and possibly human players as well. In this paper we introduce a new competition based on the popular arcade video game Ms Pac-Man: Ms Pac-Man versus Ghost Team. The competition, to be held at the Congress on Evolutionary Computation 2011 for the first time, allows participants to develop controllers for either the Ms Pac-Man agent or for the Ghost Team. Unlike previous Ms Pac-Man competitions that relied on screen capture, players now interface directly with the game engine. In this paper we introduce the competition, including a review of previous work as well as a discussion of several aspects regarding the setting up of the game competition itself. © 2011 IEEE
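The shift from screen capture to a direct game-engine interface means a controller is simply polled each tick with the game state and must return a move before a real-time deadline. The sketch below illustrates that contract; the class names, method signature, and deadline parameter are hypothetical, not the competition's actual API.

```python
import random
from enum import Enum

class Move(Enum):
    """Illustrative move set; the real game also constrains legal moves."""
    UP = 0
    DOWN = 1
    LEFT = 2
    RIGHT = 3
    NEUTRAL = 4

class Controller:
    """Hypothetical controller contract: polled once per game tick."""
    def get_move(self, game_state, time_due_ms):
        # Must return before the deadline -- the strict real-time
        # constraint the abstract mentions for this class of games.
        raise NotImplementedError

class RandomGhostTeam(Controller):
    """Trivial baseline: each of the four ghosts picks a random move."""
    def __init__(self, rng):
        self.rng = rng

    def get_move(self, game_state, time_due_ms):
        moves = list(Move)
        return {ghost: self.rng.choice(moves) for ghost in range(4)}

team = RandomGhostTeam(random.Random(0))
actions = team.get_move(game_state={}, time_due_ms=40)
```

A participant would replace the random policy with a real one; the engine-side polling loop stays the same, which is what makes submitted controllers directly comparable.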