
Cenotaph II

At the beginning of the pandemic isolation I put together a short looping animation called cenotaph. It was a wireframe cube filled with ‘projections’ of short animations I’d been making in the iPad application Looom. The projections would seem to pass through the boundaries of the cube at times. I generated a subtitle track algorithmically using the Python library Markovify and a corpus of writing loosely around the themes of interiority/exteriority and imaginary architecture (like Boullée’s Cenotaph for Newton). In some ways it was the one-person, no-budget, isolation version of Shapes and Other Shapes: an imaginary space narrated through a non-deterministic text.

I’d like to make another imaginary space to host a performance. I’m interested in using Glycon3D to record VR motion capture of myself reading another generated text. If I can, I’d like to present the finished piece in a 3D space. This could become a working process for making VR-enabled movies of a character in a space, which could be used in my Grotto project or could exist on their own. I’m interested in playing with embodiment here through a performance rather than a game.

An unaddressed concern from the first experiment: a cenotaph is an empty tomb, a monument for a missing body. Who was the cenotaph for? Who is this new cenotaph for? This will be an opportunity to have an animated death mask eulogize itself, as in Susan Silas’ Eulogy. So, in this imaginary space, who do I want to commemorate and lay to rest? How would they respond to being commemorated? How does this relate to the fact that real-life cenotaphs are almost always for the war dead? A cenotaph enshrines a version of history in architecture in remembrance of the dead. Whose dead? Whose history?

Documents:

cenotaph2_mood

cenotaph2_screens


Videos:

Audio Files:

Posts:

Cenotaph2 clip 1

After problems with an export bug in Glycon3D (which the developer says will be resolved in the upcoming version) and some problems with rendering, I was able to make a standard clip of the first portion of the piece, which has been added to the Videos section of the progress page.


text

I fed a few texts on different subjects into my Python Markov chaining script, generated a few hundred lines at a time, and then picked through them, adjusting them and putting them into a sequence that made sense to me. I like to write this way because it feels more like gardening than writing, because it feels like I am writing with a partner, and because I cannot speak for the dead or speak with authority, but I can do this: hold a seance.
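My script isn’t posted here, but the basic Markovify workflow looks roughly like the sketch below. The file names, weights, and batch size are placeholder assumptions for illustration, not my actual corpus or settings:

```python
import markovify

# Load a couple of source texts (file names here are hypothetical).
with open("interiority_notes.txt") as f:
    text_a = f.read()
with open("imaginary_architecture.txt") as f:
    text_b = f.read()

# Build a model per source, then combine them with relative weights.
model_a = markovify.Text(text_a, state_size=2)
model_b = markovify.Text(text_b, state_size=2)
model = markovify.combine([model_a, model_b], [1, 1.5])

# Generate a batch of candidate lines to pick through by hand.
for _ in range(300):
    line = model.make_short_sentence(120)  # keep lines subtitle-length
    if line:
        print(line)
```

The hand-editing and sequencing happens afterward, outside the script, which is where the gardening comes in.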


glycon3d

I spent some time today playing with Glycon3D animation in Cinema 4D (I’m relatively new to both). It’s an impressive piece of software that lets a VR headset and your hands or controllers record mocap animation and export it as an FBX of a Mixamo-like skeleton. I’m still struggling to use the animation FBX, though. The skeleton it exports uses standard Mixamo joint names, so I thought maybe I could use a retarget tag to add the animation information to my existing Mixamo-rigged model, like so. But a simple retarget ended up doing this, which looks like a skinning glitch (the “glycon glitch” clip).
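One small sanity check before blaming the retarget tag is confirming that the imported Glycon skeleton really does use the Mixamo joint names (e.g. “mixamorig:Hips”) that the target rig expects. This is just a sketch using Cinema 4D’s Python Script Manager, not part of my actual setup:

```python
import c4d

def print_joints(obj, depth=0):
    """Recursively walk the scene hierarchy and print joint names."""
    while obj:
        if obj.GetType() == c4d.Ojoint:
            print("  " * depth + obj.GetName())  # e.g. "mixamorig:Hips"
        print_joints(obj.GetDown(), depth + 1)
        obj = obj.GetNext()

def main():
    doc = c4d.documents.GetActiveDocument()
    print_joints(doc.GetFirstObject())

if __name__ == "__main__":
    main()
```

If the names line up between the Glycon export and the Mixamo rig, the problem is more likely in the bind pose or weights than in the retarget mapping itself.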
