Neural City is my first attempt at creating a video using a machine learning algorithm. Using NVIDIA's pix2pixHD machine learning script, I took a simple animation and completely transformed it. The model was trained on the Cityscapes dataset, hence the German-looking features. Find links to the resources below:
All of the images in the video were generated by outputting color-block images of digitally modeled cityscapes. I first tested this workflow by mocking up images using stock vectors in Illustrator.
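The color-block inputs work because pix2pixHD was trained to translate Cityscapes-style semantic label maps (one flat color per class) into photographic imagery. As a rough illustration of the kind of mock-up described above, here is a minimal sketch that composes a flat color-block "street scene" with Pillow; the function name is hypothetical, but the RGB values are the standard Cityscapes palette colors for sky, building, and road:

```python
from PIL import Image, ImageDraw

# Standard Cityscapes palette colors for a few semantic classes
PALETTE = {
    "sky": (70, 130, 180),
    "building": (70, 70, 70),
    "road": (128, 64, 128),
}

def mock_label_image(width=512, height=256):
    """Compose a flat color-block street scene: sky on top,
    a band of buildings in the middle, road along the bottom."""
    img = Image.new("RGB", (width, height), PALETTE["sky"])
    draw = ImageDraw.Draw(img)
    draw.rectangle([0, height // 3, width, 2 * height // 3],
                   fill=PALETTE["building"])
    draw.rectangle([0, 2 * height // 3, width, height],
                   fill=PALETTE["road"])
    return img

label = mock_label_image()
label.save("mock_label.png")
```

An image like this (or its Illustrator equivalent) is what gets fed to the model in place of a real segmentation map.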
Applying the pix2pix algorithm produced this output:
While the fidelity of the image is low, it was successful as a proof of concept. I continued to test various outputs against the algorithm to see how much variance it could handle.
I learned that its tolerances were quite restrictive. Essentially, because most of the training data had been collected from a dashboard camera, every input image needed to look like it was taken from a dashboard camera; anything that broke from that framing produced a strange-looking image, as seen above.
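In practice, the dashboard-camera constraint means the label map itself has to follow that framing: sky above a horizon line, buildings flanking the frame, and the road converging toward a central vanishing point. A hypothetical sketch of laying out a label map with that geometry, again using the standard Cityscapes palette colors:

```python
from PIL import Image, ImageDraw

SKY = (70, 130, 180)
BUILDING = (70, 70, 70)
ROAD = (128, 64, 128)

def dashcam_label(width=512, height=256, horizon_frac=0.45):
    """Lay out classes the way a dashboard camera sees them:
    sky above the horizon, buildings flanking the street, and
    the road as a trapezoid narrowing to a vanishing point."""
    horizon = int(height * horizon_frac)
    img = Image.new("RGB", (width, height), SKY)
    draw = ImageDraw.Draw(img)
    # Building facades fill everything below the horizon...
    draw.rectangle([0, horizon, width, height], fill=BUILDING)
    # ...except the road, wide at the bottom edge and converging
    # on a vanishing point at the center of the horizon.
    vanishing_point = (width // 2, horizon)
    draw.polygon([(0, height), (width, height), vanishing_point],
                 fill=ROAD)
    return img
```

Inputs built this way stay inside the distribution the model saw during training, which is why they render cleanly while top-down or head-on compositions do not.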
Additional images from the video below: