This year there have been several stories in the mainstream media about large strides made in artificial intelligence, accompanied by the occasional minor stumble. In March, AlphaGo, an AI program developed by Google DeepMind in London, thrashed the world’s leading Go player. What was particularly impressive about AlphaGo’s display was that it learned to play the game from scratch. Then, through a ceaseless, iterative process, it refined its game to the point that it currently has no peer player, machine or human, on the planet. Yet.
Last month, a team from Stanford University used a remote-controlled, human-like scubabot to pluck relics from a 350-year-old wreck on the bed of the Mediterranean. The press release showed several pictures of the virtual diver, OceanOne, and it did look just like a human scuba diver, give or take the odd wire here and there. Why was this? Why did the human-designed scubabot look so human-like? Was this truly the most efficient design?
Through adaptation, nature has been incrementally refining ocean-bound species for millions of years. A vast array of deep-sea creatures exists, many peculiarly adapted to travelling efficiently through all kinds of marine conditions, navigating the darkest depths with pinpoint accuracy, plucking objects from reefs, and so on. Humans, on the other hand, are not particularly well designed for the sea. It’s the reason we have boats, and jetskis, and submarines and … scubabots.
I wonder: if AlphaGo had been redirected to play a different game, an iterative design game to zero in on an optimal form, how would it have designed OceanOne?