HALzheimer: Watching an AI Slowly Forget – Portrait of Weightless Girl
Man and machine are not very different
In Portrait of Weightless Girl (2021), Frederik De Wilde constructs a poetic and unsettling meditation on artificial intelligence, memory, and forgetting. Made with custom algorithms for destroying weights in deep learning models, the work explores the fragility of neural memory by introducing a process of algorithmic degradation into a Generative Adversarial Network (GAN). As the network forgets, the synthetic portrait of a girl—a computational hallucination—dissolves into a monochromatic haze of disappearing pixels. This slow decay transforms the AI system into a metaphor for cognitive decline, raising profound questions about the impermanence of memory, the humanization of machines, and the politics of forgetting.
The series’ title—HALzheimer—evokes HAL 9000 from 2001: A Space Odyssey and Alzheimer’s disease, fusing cinematic reference and neurological deterioration. In this fragile, machine-mediated portrait, the neural network’s failure is not just aesthetic but conceptual. De Wilde suggests that artificial systems modeled on human cognition are susceptible to the same vulnerabilities that define us—namely, the slow erosion of memory and the collapse of coherence. The AI becomes both a tool and a mirror, reflecting the limits of knowledge and the instability of identity.
A Mirror of Impermanence
De Wilde’s portrait becomes increasingly ghostlike. At first, the AI-generated face ages—lines appear, hair thins, and the gaze becomes vacant. Later, the face appears to melt, colors blend and blur, and the portrait slides into abstraction. This transformation recalls the evolution of Monet’s palette late in life as his vision failed—greens and yellows took over. In both the biological eye and the artificial neural net, decay is incremental, difficult to perceive as it happens, and often irreversible.
Yet this artwork does not mourn memory loss; it inhabits it. HALzheimer asks what it means to forget—not just as humans, but as societies, as algorithms. If memory is a tool of power, forgetting becomes both an act of liberation and a mechanism of erasure. The colonial legacy of enforced forgetting—of indigenous knowledge, suppressed histories, erased identities—echoes in the machine’s algorithmic amnesia.
This dialectic of memory and forgetting resonates with Hegel’s master-slave dynamic, where control is always accompanied by fear of loss. In the machine’s deterioration, we witness the fragile scaffolding of mastery itself—our need to control and contain knowledge, now destabilized by a network that resists permanence.
Can machines forget? More importantly, what do we project onto their forgetting? De Wilde’s HALzheimer series is not a dystopia, but a slow unraveling that makes space for reflection—on technology, mortality, and the poetics of cognitive entropy.
Algorithmic Degradation in StyleGAN2: A Study of Neural Network Destruction
This speculative vision of memory decay is grounded in an experimental research and development process conducted by Studio De Wilde. In this R&D project, the effects of systematically degrading neural network weights within the StyleGAN2 architecture were explored, with a particular focus on convolutional and dense layers. The aim was to analyze how various strategies of network destruction would manifest visually in the generated outputs.
Four distinct destruction algorithms were devised and tested:
- Algorithm 1: destroys convolutional weights using a smooth degradation algorithm.
- Algorithm 2: destroys all network parameters using a smooth degradation algorithm.
- Algorithm 3: destroys convolutional weights using a binary degradation algorithm.
- Algorithm 4: destroys all network parameters using a binary degradation algorithm.
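The four strategies can be sketched as operations on a checkpoint's parameter tensors. The following is a minimal NumPy sketch, not Studio De Wilde's actual code: the parameter names, the linear scaling used for "smooth" degradation, and the random-mask form of "binary" degradation are all assumptions for illustration.

```python
import numpy as np

def smooth_degrade(weights, t):
    """Scale weights continuously toward zero; t=0 leaves them intact, t=1 erases them."""
    return weights * (1.0 - t)

def binary_degrade(weights, t, rng):
    """Zero out a random fraction t of the weights outright (all-or-nothing)."""
    mask = rng.random(weights.shape) >= t
    return weights * mask

def destroy(params, t, mode="smooth", conv_only=True, rng=None):
    """Apply one of the four strategies to a dict of parameter arrays."""
    out = {}
    for name, w in params.items():
        targeted = (not conv_only) or (name.startswith("conv") and name.endswith("weight"))
        if not targeted:
            out[name] = w  # e.g. biases survive in Algorithms 1 and 3
        elif mode == "smooth":
            out[name] = smooth_degrade(w, t)
        else:
            out[name] = binary_degrade(w, t, rng)
    return out

# Hypothetical parameter dict standing in for a StyleGAN2 checkpoint.
rng = np.random.default_rng(0)
params = {
    "conv1.weight": rng.standard_normal((8, 3, 3, 3)),
    "conv1.bias": rng.standard_normal(8),
    "dense.weight": rng.standard_normal((16, 8)),
}

# Algorithm 2 at full strength: every parameter ends up zero (hence the grey frames).
wrecked = destroy(params, t=1.0, mode="smooth", conv_only=False)
assert all(np.all(v == 0.0) for v in wrecked.values())

# Algorithm 1 at full strength: conv kernels are gone, but biases survive.
partial = destroy(params, t=1.0, mode="smooth", conv_only=True)
assert np.all(partial["conv1.weight"] == 0.0)
assert np.any(partial["conv1.bias"] != 0.0)
```

The conv-only variants model why Algorithms 1 and 3 still emit a residual signal: the untouched biases keep feeding non-zero values forward even after the feature extractors are nullified.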
Figures generated by Algorithms 2 and 4, which set all network parameters to zero, resulted in uniform grey outputs. This is because zeroing every parameter eliminates all signal propagation through the network. Given the [-1, 1] color range used in GANs, a constant zero output maps to mid grey, indicating a complete absence of visual information.
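The grey frames follow directly from the conventional mapping of the generator's [-1, 1] output range to 8-bit pixels; a constant zero lands exactly in the middle of that scale. The conversion below is the standard convention, shown for illustration rather than taken from the project's code.

```python
def to_uint8(v):
    """Map a generator output in [-1, 1] to an 8-bit pixel value."""
    return round((v + 1.0) / 2.0 * 255)

# A network with every parameter zeroed emits 0 everywhere, which lands
# in the middle of the 8-bit range: uniform mid grey.
print(to_uint8(0.0))   # 128
print(to_uint8(-1.0))  # 0   (black)
print(to_uint8(1.0))   # 255 (white)
```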
By contrast, Algorithms 1 and 3, which preserved biases while destroying convolutional weights, produced more complex degradations. These outputs reveal that even when the convolutional feature extractors are nullified, the network’s residual biases still generate some form of signal—albeit distorted and non-representational. This underscores the importance of convolutional layers in shaping generative coherence, while biases alone are insufficient to produce meaningful image structures.
Surprisingly, the destruction of dense layers did not yield any noticeable change in the generated output. Despite repeated verifications, the absence of visible alteration remains an open question, prompting further investigation into the differential roles that dense and convolutional layers play in high-resolution image synthesis. These insights emerged from training and modifying the StyleGAN2 models via a custom Colab-based workflow, which allowed the team to compute projections and document frame-by-frame degradation across 200-frame sequences.
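The frame-by-frame workflow can be pictured as a degradation level that ramps from intact to fully destroyed over each 200-frame sequence. The linear schedule below is an assumption for illustration; the studio's actual schedule is not documented in this text.

```python
# Hypothetical ramp for a 200-frame sequence: frame 0 is the intact
# GAN projection, frame 199 is maximally degraded.
FRAMES = 200

def degradation_level(frame, total=FRAMES):
    """Fraction of destruction applied at a given frame (linear, assumed)."""
    return frame / (total - 1)

levels = [degradation_level(f) for f in range(FRAMES)]
assert levels[0] == 0.0 and levels[-1] == 1.0
```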
A final dataset was compiled comprising the original source sequence, the GAN-projected sequence (without degradation), and 48 degraded sequences resulting from the four algorithms applied across twelve variants. In total, the study includes 4 algorithms × 12 sequences × 200 frames, providing a granular view of the machine’s cognitive unraveling. A summary table showcases the degradation of a single reference frame—frame 00033—across all permutations.
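The dataset's scale can be tallied directly from the figures quoted above:

```python
algorithms = 4
variants = 12                 # sequences per algorithm
frames_per_sequence = 200

degraded_sequences = algorithms * variants                        # 48 degraded sequences
total_degraded_frames = degraded_sequences * frames_per_sequence
print(degraded_sequences, total_degraded_frames)  # 48 9600
```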
This empirical research reveals both the vulnerabilities and surprising robustness of deep generative models. It contributes not only to De Wilde’s artistic narrative but also to the broader discourse on AI interpretability, robustness, and failure modes. The destruction of neural networks is thus not simply an act of aesthetic experimentation—it is also a philosophical inquiry into the foundations of synthetic cognition.
- John McCarthy (1927-2011), the American computer scientist who coined the term Artificial Intelligence (AI), defined it as “the science and engineering of making intelligent machines, especially intelligent computer programs.” Artificial (made by people) intelligence (the power of thinking) is the study of machines that can perceive, make decisions, and act like humans. Such machines are able to acquire knowledge and skills, to understand changing situations, and to know how to deal with them.
- Machine Learning gives an algorithm the tools to learn from data rather than from explicitly programmed rules. One example is face synthesis: Generative Adversarial Networks (GANs), a type of neural network, learn from existing photos to produce new, realistic yet purely fictional faces. The interconnected neurons of the artificial network determine the features: the eye color, skin color, shapes, hair color… in the same way that a human brain uses a network of neurons to construct a mental image of a face.


