DIGITAL DIARY
YOUTUBE WATCH HISTORY
A.I.-MADE VIDEO
BACKGROUND
When Ruth Patir explained the assignment for the course she teaches, "Digital Diary", I immediately thought about white noise: being overstimulated by the endless stream of information I've been bombarded with over the years. I thought about how it's almost impossible to remember it all or to make sense of it now. It all becomes a blur, a hum of inputs, a white noise machine.
We were encouraged to use data, not footage, and to embrace A.I. instead of shunning it; so I went to Google Takeout (the place where you download your data from Google) and chose to download my YouTube watch history. Why? Because YouTube is my digital home. I'm always there, consciously, in the background, as a radio, a TV, as my autodidact school; I love it there. So, I put in my request and a few hours later I received an HTML file weighing 53MB. Inside were 53,506 entries, or videos watched, going back almost eight years to May 26th, 2017, each containing the title of the video with a link, the title of the channel with a link, and the timestamp at which the video was watched.
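For the technically curious, here's a minimal sketch of how such a Takeout file can be parsed. It assumes the export's usual layout, where each entry sits in a div whose class includes "content-cell", holding two links (video first, channel second) followed by the watch timestamp; the class names and filenames here are assumptions, not taken from my actual file:

    # parse_history.py -- a hypothetical sketch, not one of the actual scripts
    from bs4 import BeautifulSoup

    with open("watch-history.html", encoding="utf-8") as f:
        soup = BeautifulSoup(f, "html.parser")

    entries = []
    for cell in soup.find_all("div", class_="content-cell"):
        links = cell.find_all("a")
        if len(links) < 2:
            continue  # unavailable videos may lack a channel link
        strings = list(cell.stripped_strings)
        entries.append({
            "title": links[0].get_text(strip=True),
            "url": links[0]["href"],
            "channel": links[1].get_text(strip=True),
            "channel_url": links[1]["href"],
            "watched_at": strings[-1],  # the timestamp is the last text in the cell
        })

    print(len(entries), "entries parsed")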
Originally I imagined uploading the HTML file to ChatGPT and extracting information from all the data, but the file was too heavy for it to handle. So I went into manual mode: first I figured out that ChatGPT can handle up to 100 entries at a time without lying and improvising when extracting information; then I asked my genius buddy Ron Danon to help me out. Ron installed a Python environment and a code editor on my computer and showed me how easy it is to generate Python scripts with ChatGPT without knowing how to code. By simply talking to the LLM and explaining what you want to achieve and how your folders are set up, it generated scripts that I copied, pasted and ran locally to extract the information.
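To give a feel for what such a generated script looks like, here's a minimal sketch of the first task described next: splitting the history into 100-entry TXT files. The filenames are hypothetical, and the scripts ChatGPT actually generated surely differed:

    # chunk_entries.py -- hypothetical sketch of a ChatGPT-generated script
    import os

    CHUNK = 100  # the largest batch ChatGPT handled without improvising

    with open("all_entries.txt", encoding="utf-8") as f:  # one entry per line
        lines = [ln.rstrip("\n") for ln in f if ln.strip()]

    os.makedirs("chunks", exist_ok=True)
    for i in range(0, len(lines), CHUNK):
        name = f"chunks/entries_{i // CHUNK:04d}.txt"
        with open(name, "w", encoding="utf-8") as out:
            out.write("\n".join(lines[i:i + CHUNK]))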
It started off with generating a script that split the history into hundreds of TXT files, each containing 100 entries (with later uploading to ChatGPT in mind), then generating scripts that extracted the information from all those TXT files into one TXT file about my watch history. I noticed that too many videos in it were unavailable. I checked: they had been removed, the information lost forever. From all this text I wanted to create a visual, so a new script was generated to connect to YouTube's API, cross-match all the entries from the TXT files, download their thumbnails and create corresponding folders for them. Then I created an image for the unavailable videos and asked ChatGPT to generate a script that completes the thumbnail folders with that image for every unavailable video. Then a script was generated to create an MP4 file, at 60fps and 1920x1080px, using a single thumbnail on a single frame: a strobe of 60 thumbnails per second, or 3600 thumbnails per minute. Finally, I asked ChatGPT to add the title, channel and timestamp in white at the top left corner.
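Again for the curious, here are sketches of the last steps. The thumbnail downloader is hypothetical: the real script connected through YouTube's API, but a simpler stand-in is the public thumbnail URL pattern, which needs no API key:

    # fetch_thumbnails.py -- hypothetical sketch using public thumbnail URLs,
    # not the actual API-based script
    import os
    import re
    import requests

    os.makedirs("thumbnails", exist_ok=True)

    with open("all_entries.txt", encoding="utf-8") as f:
        for line in f:
            m = re.search(r"v=([\w-]{11})", line)  # the 11-character video ID
            if not m:
                continue
            vid = m.group(1)
            url = f"https://img.youtube.com/vi/{vid}/hqdefault.jpg"
            resp = requests.get(url, timeout=10)
            if resp.ok:
                with open(f"thumbnails/{vid}.jpg", "wb") as out:
                    out.write(resp.content)
            # removed videos may 404 or return a gray stock image; the
            # "unavailable" placeholder fills those gaps in a later pass

And the video assembly, sketched here with OpenCV (one library that can do it; I don't know which one the generated script actually used), one thumbnail per frame at 60fps:

    # build_video.py -- hypothetical sketch: 60fps, 1920x1080, one thumbnail per frame
    import glob
    import cv2

    FPS, W, H = 60, 1920, 1080
    writer = cv2.VideoWriter("white_noise.mp4",
                             cv2.VideoWriter_fourcc(*"mp4v"), FPS, (W, H))
    placeholder = cv2.imread("unavailable.png")  # the image made for removed videos

    for path in sorted(glob.glob("thumbnails/*.jpg")):
        img = cv2.imread(path)
        if img is None:
            img = placeholder  # fall back to the unavailable-video image
        frame = cv2.resize(img, (W, H))
        # in the real script the caption came from the matching TXT entry;
        # a stand-in string here shows the white top-left overlay
        cv2.putText(frame, "title | channel | timestamp", (40, 60),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2, cv2.LINE_AA)
        writer.write(frame)  # each thumbnail occupies exactly one frame

    writer.release()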
I stared at this Clockwork Orange-like madness and found a calming, meditative peace in it: so many memories, thoughts and feelings, a white noise machine. A machine needs a hum, a hum at 3600BPM pulsing, vibrating, oscillating, like a mantra. I made a beat and added it; that was the only bit done with editing software.
Please remember, this is a test, a sketch, a thought experiment. It's leading up to a documentary film, a digital diary about my eight-year journey in my digital home of YouTube: what have I consumed? What have I gained? What have I learned? Have I changed?
