Well, sort of.
From August 13 through August 17, the Tate tried an experiment. After hours (10pm to 3am in the UK, 5pm to 10pm in the eastern US), robots roamed the darkened museum, controlled by random people from all corners of the globe. As the robots wandered through the museum's rooms, museum staff commented on the works of art the robots happened to "see".
I had to take a peek to see how the experiment was going.
It was pretty darn cool.
The images from the four robots and the narration by museum staff were streamed live. There was an eeriness to wandering through a museum at night, the artworks illuminated by the lights of the robots' "eyes". The screen was divided into quarters to relay the views from all four robots at once. On occasion a robot would go offline until museum personnel could investigate and reset it.
I found it exciting to be wandering around the Tate from my apartment in Indiana, but a few points were frustrating.
The narrators would discuss any interesting piece that one of the robots was seeing, but it wasn't always clear which robot. The robots were rather spastic, never staying long at one object, so by the time the narrators started discussing a work of art, the robot had usually moved on. That made figuring out which artwork they meant even more of a challenge.
I found myself wanting a more systematic tour, with the robots stopping to gaze at a work of art while the narrators described it.
It will be interesting to see what the Tate learned from its experiment and what After Dark 2.0 will look like (assuming there is another iteration).