    Deep Blue is still an infant.

    We believe that by the time of the workshop Deep Blue will have lost another match to Garry Kasparov, showing little improvement over the previous one. But even if it is indeed a "new kind of intelligence", it can be argued that this intelligence is very basic. There is a long way to go before Deep Blue could be considered a fully autonomous agent, the type we are eagerly trying to build in AI and that we believe we will eventually succeed in building. We should not consider Deep Blue (or any software agent) fully autonomous until it can manage its own computational resources (memory and time), minimize the risk and cost of various decisions, assess its errors and develop new representations of (chess) knowledge, and cooperate and communicate with other computer and human chess players.

    We do feel that Deep Blue falls somewhere non-trivial on the scale of intelligence. But to move further along that scale, greater autonomy will be required: Is Deep Blue now selecting its own openings, deciding when and whether to play for a draw, and so on? Does it generalize from its past experiences, annotate its own games, and the like? Does it know that it has played the agent "GKasparov" before? That the match will last 6 games (like last time)? Can it apply the information- or decision-theoretic model by which it chooses its moves to other operations-management and control situations? Does Deep Blue know that one mistake against the World Champion could be fatal? Does it know that bishops can only reach squares of one color? Does it contain a declarative representation of the rules of chess?