Electromagnetic Spiral film [ML for the Web]
Making use of Runway's AI Magic Tools to generate a short video art film.
Workflow
I started by generating an image with the Text to Image tool and the prompt “electromagnetic field”.
Then I sent that image to the Image Variation tool, which I ran a few times before landing on an image that spoke to me.
From there, I ran Erase and Replace on one of the variations several times to create a few frames of the image transforming into a spiral.
I then ran Image Variation again to introduce change across the sequence.
Next, I ran Image to Image a few times to manipulate the color of the image with the prompt “blue light fill”.
Lastly, I brought all of the images together in Frame Interpolation to create a short video.
https://user-images.githubusercontent.com/49932341/227036037-dc29c8fb-ea5d-4578-ae5c-85aad98ef783.mp4
Describe the results of working with the tool, do they match your expectations?
I believe the result is a fairly simple video with a slight computer-graphics look and a strong AI-generated look. I expected Frame Interpolation to be more intelligent in creating an organic transition between frames, but it seems to be little more than a morph transition tool. The Text to Image generations met my expectations, creating images that brought me joy in relation to my unrealistic prompts.
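To make the "morph transition" observation concrete: a minimal sketch of that kind of transition is a linear cross-dissolve between two keyframes. This is not Runway's actual algorithm (which presumably estimates motion between frames); it is only an illustration of the simpler blend-style effect described above, using NumPy and synthetic frames.

```python
import numpy as np

def cross_dissolve(frame_a, frame_b, n_steps):
    """Linearly blend frame_a into frame_b over n_steps in-between frames."""
    frames = []
    for i in range(n_steps):
        t = i / (n_steps - 1)            # interpolation factor, 0.0 -> 1.0
        blend = (1 - t) * frame_a + t * frame_b
        frames.append(blend.astype(np.uint8))
    return frames

# Two synthetic 4x4 RGB "keyframes": solid black and solid white
a = np.zeros((4, 4, 3), dtype=float)
b = np.full((4, 4, 3), 255.0)

sequence = cross_dissolve(a, b, 5)
print([int(f[0, 0, 0]) for f in sequence])  # → [0, 63, 127, 191, 255]
```

A true frame interpolator would warp pixels along estimated motion paths instead of blending them in place, which is why a plain cross-dissolve reads as a "morph" rather than an organic transition.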
Can you "break" the tool? In other words, use it in a way it wasn't intended for, and what kinds of results do you get?
By inputting abstract concepts into the Text to Image tool rather than real-life reference points, I often found the AI confused and producing unreadable text. Inputs such as “fear”, “love”, and “security” all seemed to break the AI’s flow of generation.
Can you find any pro tips in terms of prompt engineering?
If you are looking to generate images of abstract concepts, include more words for real-life reference points that are likely to have recorded imagery in the model's training data.
Compare and contrast working with Runway as a tool for machine learning as related to ml5.js, python, and any other tools explored this semester.
Working with AI Magic Tools in Runway felt like a much more passive experience than working in ml5.js for image generation/training and Python for text generation in the past. The latter two allow much more control over the generation process and thus, in my opinion, produce more satisfying results.