MOVING
STILL

<intro>

"Don't paint from nature too much. Art is an abstraction. Derive this abstraction from nature while dreaming before it, and think more of the creation that will result."
- Paul Gauguin
Trailer for "Moving Still" (2022).

/ / What is Moving Still?

Moving Still is a 13-minute experimental short film and art installation.

It takes the viewer on an odyssey through constantly morphing and pulsating nature scenes, with an eerie, dreamlike atmosphere. The visuals, both interconnected and disintegrating, evoke a haunting liminal space.

An ever-growing stem of memories, past or present? Clutching onto silhouettes and shadows in a world too fast to perceive, running into the unknown abyss.


/ / Project Info

data = {
    "date": "2022-05", //+ continuous work till present
    "type": "personal project",
    "contributor": "Benno Schulze",

    "category": [
        "EXPERIMENTAL",
        "SHORT_FILM",
        "GAUGAN2",
        "ART_INSTALLATION"
    ]
}

<GALLERY>

Still Frames (GauGAN 2): 0185, 1144, 1213, 1622, 2221, 2360, 2644, 2929, 6850, 6964, 7733, 8109, 8513, 10393, 10779, 11466, 11980, 12131, 12561, 12893, 13068, 13332, 13819, 14453, 14628, 14717, 14866, 15331, 15480, 15748, 17323, 17486, 17866, 18061, 18249, 18287, 18358, 18435, 20375, 22205, 22246, 23140

<INFOS>

/ / Project Context

Moving Still was created as a passion project, stemming from lengthy experiments with GauGAN Beta and, later on, GauGAN 2 (more about that in the project insight). I found the basic concept of being able to produce artificial, photorealistic scenes of nature immensely intriguing.

What I found even more fascinating, however, were the technical aspects: the inner workings of the GAN. I wanted to understand how it works, dissect its processes, test its limits, and use its weaknesses as a stylistic device, rather than trying to create a perfect copy of reality.

Supporting and enhancing the visual narrative with the use of AI became my primary focus.

From what I learned about GANs, I always drew parallels to the human brain: neurons firing, creating artificial imagery right before your very own eyes. You can imagine the shape of a house, the number of windows, the color of the door, and, drawing from images you've seen and environmental influences (essentially the training data), your brain fills in the shapes to produce a somewhat realistic image with ease.

Back to the GAN: the strong divergence between its individual video frames stems directly from the limited capabilities of GauGAN Beta (2019) / GauGAN 2 (2021), developed by Taesung Park et al. at NVIDIA Research. Although it is no longer available, it was, to my knowledge, the first image generator made available to the public.

The GAN (Generative Adversarial Network) was trained on 10 million—unconnected—reference images of landscapes and, as such, lacks frame consistency since video synthesis was never part of its training data.
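
The practical consequence can be illustrated with a minimal, purely hypothetical sketch (the actual frames were produced through the gaugan.org web interface, not through this code): each segmentation map is sent to the generator on its own, so no state links one frame to the next, and textures, lighting and details are free to diverge.

from pathlib import Path

def generate_from_segmentation(seg_map: Path) -> bytes:
    """Hypothetical stand-in for a single GauGAN 2 inference call."""
    raise NotImplementedError  # the real model ran behind gaugan.org/gaugan2

Path("frames").mkdir(exist_ok=True)
for i, seg_map in enumerate(sorted(Path("segmentation_maps").glob("*.png"))):
    # every frame is generated independently; nothing carries over from frame i-1
    frame = generate_from_segmentation(seg_map)
    Path(f"frames/{i:05d}.png").write_bytes(frame)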

Even though I created the first version of the short film back in 2022, I have since made multiple additions to both the visual and auditory layers, and I still have things to work on and experiment with, out of pure joy for the base idea. Some of those changes found their way into the project insight.
Further technical input / documentation on GauGAN2

data = {
    "web-resources": [
        "Semantic Image Synthesis with Spatially-Adaptive Normalization",
        //[Taesung_Park;Ming-Yu_Liu;Ting-Chun_Wang;Jun-Yan_Zhu]
        //[arxiv.org][PDF]
        "Understanding GauGAN",
        //[Ayoosh_Kathuria]
        //[paperspace.com]
        //[Part1]:_Unraveling_Nvidia's_Landscape_Painting_GANs
        //[Part2]:_Training_on_Custom_Datasets
        //[Part3]:_Model_Evaluation_Techniques
        //[Part4]:_Debugging_Training_&_Deciding_If_GauGAN_Is_Right_For_You
        "GauGAN for conditional image generation",
        //[Soumik_Rakshit;Sayak_Paul]
        //[keras.io]
    ]
}

/ / Concept

The lack of frame consistency, which results in a surreal, abstract pulsation of shapes and edges, abrupt changes in lighting moods, or even the complete replacement of objects, establishes a new layer of narration. The image surface is held together solely by the silhouette and composition of its visual elements.

Further discomfort, intentionally evoked in the recipient, stems from the dissonance between various visual elements within a single frame. While the camera pans and objects or trees move, other elements, such as the ground, appear to remain static. Depending on the viewer's subjective focus, the scenes, despite their linear progression, can therefore have a completely different impact and a different perceived degree of control. Apart from the image-controlling segmentation maps (LINK), the outcome is entirely left to the GAN. The recipient is watching a virtual, artificial copy of a landscape that never existed, or did it?
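
For readers unfamiliar with how the segmentation maps steer the output, here is a hedged sketch: a SPADE-style generator like GauGAN receives the hand-painted map as per-class label channels, and everything not encoded in those channels (texture, lighting, micro-detail) is invented by the network. The colour-to-class palette below is illustrative, not the actual GauGAN 2 palette.

import numpy as np
from PIL import Image

CLASS_COLOURS = {          # illustrative palette, not GauGAN 2's real colour codes
    (110, 180, 60): "tree",
    (90, 170, 250): "sky",
    (140, 110, 70): "dirt",
}

def to_label_channels(seg_map_png: str) -> np.ndarray:
    """Turn a colour-coded segmentation map into one-hot class channels."""
    rgb = np.asarray(Image.open(seg_map_png).convert("RGB"))
    channels = [np.all(rgb == colour, axis=-1).astype(np.float32)
                for colour in CLASS_COLOURS]
    return np.stack(channels, axis=0)  # shape: (num_classes, height, width)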

On another immersive level—parallel to the video—the auditory layer creates its own abstraction of the senses. At first, there are low-frequency sound effects, barely consciously perceptible, such as an almost omnipresent rattling, the playback of memories or video frames, similar to an old film projector. During certain phases, calibrated highs and lows offer the viewer moments to dive in as well as moments to breathe. The intra-diegetic soundscape is enriched with subtle, experimental music elements created by Azure Studios.
Frame 6964 (Moving Still)

<INSIGHT>

Excerpt from the work-in-progress material, showcasing the steady improvement in quality and animation.

    "enabledSoftware": [
        "Cinema 4D.exe", //main 3D Software (for input maps)
        "AfterEffects.exe", //post edit
        "Audacity.exe", //custom foley refinement
        "Premiere.exe", //sound design
        "Stable Diffusion", //AI-DLM [link]
           "Automatic1111", //webUi [link]
           "sd-webui-control-net", //AI-NNM [link]
           "depth-map-script", //depth map generator [link]
        "TopazGigapixelAI.exe" //upscaling

    ],
    "webpages": [
        "gaugan.org/gaugan2", //!no more available!
    ],
}  
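
A hedged sketch of what such an automation step can look like: rendered frames are pushed through a locally running Automatic1111 instance via its img2img web API. This assumes the UI was started with the --api flag on the default port; the prompt and denoising strength are placeholders, not the settings actually used for the film.

import base64, json
from pathlib import Path
from urllib import request

API_URL = "http://127.0.0.1:7860/sdapi/v1/img2img"

Path("refined").mkdir(exist_ok=True)
for frame in sorted(Path("frames").glob("*.png")):
    payload = {
        "init_images": [base64.b64encode(frame.read_bytes()).decode()],
        "prompt": "photorealistic landscape",  # placeholder prompt
        "denoising_strength": 0.35,            # low value keeps the composition
    }
    req = request.Request(API_URL, json.dumps(payload).encode(),
                          {"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        result = json.loads(resp.read())
    # first entry of "images" is the processed frame, returned as base64
    Path("refined", frame.name).write_bytes(base64.b64decode(result["images"][0]))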

/ / GauGAN (Beta) - early testing

/ / Jumping to GauGAN2 + Automation

/ / Experimenting

/ / Starting the journey
