I watched it earlier tonight and noticed that maybe half of the audience were also recording the show on their mobile phones or video cameras, each from a different spot. That would mean at least 200 video captures of the same event from different locations, and that is just for one 10-minute segment. Since the show repeats every 30 minutes until 10:00 PM, we would end up with roughly 2,000 unique videos of the event.
Those video captures could, in theory, be used to create a bullet-time rendering of the entire show.
Bullet time, according to Wikipedia, "is a visual effect or visual impression of detaching the time and space of a camera (or viewer) from that of its visible subject". This effect, implemented most famously in the movie "The Matrix", was achieved by firing multiple cameras in an array at the same time or in sequence. Frames captured at a single instant are then arranged to create a slow-motion, orbiting effect, as in the scene where Neo dodges bullets in extreme slow motion (hence the name).
One would only need to do a few things to compile the videos: stabilize each video so that its framing stays fixed; normalize the brightness and colors of all videos so that every image shares the same tone; and normalize all videos to a common resolution.
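The normalization steps above can be sketched in a few lines. This is a minimal toy version using only NumPy: brightness is matched by shifting each frame to a common mean intensity, and resolution is matched with a crude nearest-neighbor resize (a real pipeline would use a proper video library and per-channel color correction). `TARGET_SHAPE` and `target_mean` are arbitrary choices, not values from the original idea.

```python
import numpy as np

TARGET_SHAPE = (480, 640)  # hypothetical common resolution for all captures

def normalize_brightness(frame, target_mean=128.0):
    """Shift a grayscale frame so its mean intensity matches target_mean."""
    shift = target_mean - frame.mean()
    return np.clip(frame + shift, 0, 255)

def resize_nearest(frame, shape):
    """Nearest-neighbor resize (a stand-in for a real resampling filter)."""
    rows = (np.arange(shape[0]) * frame.shape[0] / shape[0]).astype(int)
    cols = (np.arange(shape[1]) * frame.shape[1] / shape[1]).astype(int)
    return frame[np.ix_(rows, cols)]

# Two simulated "captures" with different exposures and resolutions
a = np.random.default_rng(0).uniform(0, 100, (240, 320))    # dark, low-res
b = np.random.default_rng(1).uniform(100, 255, (480, 640))  # bright

a_norm = resize_nearest(normalize_brightness(a), TARGET_SHAPE)
b_norm = resize_nearest(normalize_brightness(b), TARGET_SHAPE)
```

After this pass, every frame has the same size and the same average brightness, so frames from different phones can be cut between without jarring exposure jumps.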
Once done, run an algorithm to estimate each camera's location relative to a baseline camera, using the distances between reference points (the Christmas lights themselves would serve well). For time synchronization, we can use the background sound as the reference.
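The audio-based time sync is the easier half, and a standard trick is cross-correlation: slide one soundtrack against the other and find the lag where they match best. Here is a minimal sketch, assuming the audio tracks have already been extracted as mono sample arrays at a common sample rate; `estimate_offset` is a name invented for this example.

```python
import numpy as np

def estimate_offset(ref_audio, other_audio):
    """Estimate the lag (in samples) at which other_audio best matches
    ref_audio. A positive lag means the shared sound appears that many
    samples later in other_audio's timeline."""
    corr = np.correlate(other_audio - other_audio.mean(),
                        ref_audio - ref_audio.mean(), mode="full")
    return int(corr.argmax() - (len(ref_audio) - 1))
```

With the per-video offsets known, every frame of every capture can be stamped with a common "show clock" time, which is exactly what the freeze-frame effect needs.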
After all processing is done, we could, in theory, freeze the show at any given instant and then sweep our perspective through hundreds of viewpoints, as if flying around and between the lights. Anyone could then enjoy the show from multiple vantage points, seeing it through everyone else's eyes. Now that would be a very different and amazing way to watch the show.
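Once every capture carries a time offset and an estimated bearing around the show, the freeze-and-orbit effect reduces to simple bookkeeping: pick the frame showing the chosen instant in each video, then step through the cameras in bearing order. A sketch, where the camera catalog, its field names, and the common frame rate are all assumptions for illustration:

```python
FPS = 30  # assumed common frame rate after resampling all videos

def frame_at(global_t, video_offset):
    """Index of the frame in a video that shows show-clock time global_t,
    given that video's start offset (in seconds) from the show clock."""
    return round((global_t - video_offset) * FPS)

def orbit_order(cameras):
    """Sort cameras by bearing (degrees) so stepping through them
    produces a smooth orbiting sweep around the show."""
    return sorted(cameras, key=lambda c: c["bearing"])

# Hypothetical catalog built by the earlier localization and sync steps
cameras = [
    {"id": "phone-17", "bearing": 250.0, "offset": 3.2},
    {"id": "phone-04", "bearing": 10.0,  "offset": -1.5},
    {"id": "cam-02",   "bearing": 120.0, "offset": 0.0},
]

# Freeze the show 42 seconds in: one frame per camera, in orbit order
sweep = [(c["id"], frame_at(42.0, c["offset"])) for c in orbit_order(cameras)]
```

Playing the frames in `sweep` back-to-back is the crude version of the bullet-time orbit; a polished result would also interpolate between neighboring viewpoints.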