Analysis of the 2019 Performative Computation Symposium

Thanks to co-curator Selwa Sweidan, I was invited to attend the 2019 Performative Computation Symposium.  Held in downtown Los Angeles, the symposium had three parts: the first primarily pertaining to Embodied Improvisation, Data Retrieval, and Performative Art; the second dealing with Computational Arts, Engineering, and Speculative and Applied AI; the third exhibiting various performances, screenings, and a reception (I didn’t attend the reception).  The first two parts overlapped considerably in their subcategories, so many of the presenters shared similar theses even though their output tended to differ.

Moderators Sweidan and Roxane Fenton assembled a wide variety of artists and performers to present their computational and improvisational work to the crowd at the NAVEL site.  The list included Alison D’Amato of USC, Sara Schnadt of JPL, Dr. Joshua Gomez of UCLA, Alfred Darlington (Daedelus), Zena Bibler and Darrian O’Reilly, Jessica Rajko of ASU, Surabhi Saraf, Brian Getnick, Parag Mital, Shelleen Greene, and Catherine Griffiths of USC.

The symposium focused primarily on dance and performance art in the context of computation.  Being an architect and computational designer, I really couldn’t critique the conversations and presentations on the nuances of contemporary dance theory; however, I could appreciate some of the same problems I encounter as a designer trying to actualize my digital process in the real world.  For the most part, I won’t pontificate much on the predominant dance-oriented presentations, given my exceptional ignorance on the matter.  However, I plan to convey some of my interests in the more computational presentations showcased that day, and to post some of the questions I didn’t get to ask at the end.

I’ve been to or participated in many architectural symposia, and I’ve visited my share of interactive art and video conferences, but this was my first foray into dance and performance art exhibitions.  Not knowing what to expect, I came in waiting to get a grasp of the context of the work being presented.

The first two presenters were Dr. Alison D’Amato of USC and Sara Schnadt of the Jet Propulsion Laboratory.  I didn’t take many notes during these lectures, primarily because I was still finding my bearings.  I mainly just took it in, waiting to get a feel for the conference.

Next was the Head of Software Development and Library Systems at UCLA, Joshua Gomez.  By coincidence, when I arrived prior to the start of the event, I sat next to Joshua and struck up a conversation.  Coming from the Getty Research Institute and now working within UCLA’s library system, he is developing methodologies and infrastructures for aggregating, filtering, and verifying the authenticity of metadata across various disciplines.  He described the problems with conventional data acquisition (the DPLA only dealing with existing digital content, search engines limited to strings, etc.), so he adopted a semantic (subject/predicate/object) system that uses URLs as unique IDs.  He is implementing this “Linked Data” process and suggested a universal URL language across all participating APIs.
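As I understood it, the Linked Data model expresses each fact as a subject/predicate/object triple whose terms are URLs acting as globally unique IDs, so independent databases can refer unambiguously to the same entity.  A minimal sketch of the idea (the URLs here are invented placeholders, not UCLA’s actual identifiers):

```python
# Each fact is a (subject, predicate, object) triple; every term is a
# URL acting as a globally unique ID. All URLs below are invented
# placeholders for illustration only.
triples = [
    ("http://example.org/work/42",  "http://example.org/prop/creator", "http://example.org/person/7"),
    ("http://example.org/person/7", "http://example.org/prop/name",    "Artemisia Gentileschi"),
]

def objects(subject, predicate):
    """Look up all objects recorded for a subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]
```

Because the IDs are URLs rather than bare strings, two institutions that agree on the identifiers can merge their triple sets without the string-matching ambiguity he described.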

He did mention this method still has problems, notably how a digital process can authenticate valid artwork and its provenance.  Also, the project, even though it’s highly digital, still requires thousands of human-hours of work to digitize and verify provenance; he cited a recent example of art forgery.

I caught up with him at the intermission and asked whether the verification process could be automated in any way.  Could the provenance check be scripted against existing databases (like a trusted wiki or other websites) to cross-check the veracity of obviously forged information, such as checking the date of acquisition/creation/sale against known artistic dates and places in history?  It could also use spectral and chemical analysis reports to verify whether a given material was known to exist at a certain time.  Finally, I asked if blockchain tech could help distribute the content to ensure that database information isn’t corrupted to support forgery in the future.
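The simplest version of the cross-check I had in mind is just a date sanity test: flag any record whose claimed creation date falls outside the artist’s documented working years.  A minimal sketch, where the lookup table and function names are my own invention (a real system would query a trusted external database):

```python
# Hypothetical provenance sanity check. The lookup table stands in for
# a trusted external database of artists' documented working years.
ARTIST_YEARS = {"Vermeer": (1632, 1675)}  # (born, died) - example entry

def flag_anachronisms(record):
    """Return red flags for a claimed artwork record, e.g.
    {"artist": "Vermeer", "claimed_year": 1700}."""
    flags = []
    born, died = ARTIST_YEARS.get(record["artist"], (None, None))
    if born is not None and not (born <= record["claimed_year"] <= died):
        flags.append("claimed date outside the artist's lifetime")
    return flags
```

Checks like this can only surface obvious anachronisms; as Joshua pointed out, the subjective judgments still fall to human experts.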

He said that even with all that, the process would still need a lot of human verification.  The subjectivity of the work makes it hard to automate.

He mentioned in his lecture that, ultimately, separate database holders would need to agree on a universal URL language or specification to allow their respective APIs to aggregate the data for users.  I suggested that maybe the problem isn’t needing a universal URL spec, but rather a universal mapping spec.  Much like the mapping specs for structural steel catalogs across different AEC platforms, this could resolve the issue for agencies that don’t want to revamp their internet infrastructure to conform to a metadata search like Linked Data.
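The mapping-spec idea can be sketched as a small translation table: each institution keeps its own schema, and a shared spec translates its field names into a common vocabulary.  All institution and field names below are invented for illustration:

```python
# Hypothetical mapping spec: each institution keeps its native metadata
# schema; a shared table maps its field names to a common vocabulary.
# All institution and field names are invented for illustration.
MAPPING_SPEC = {
    "museum_a": {"artist_name": "creator", "made_in": "date_created"},
    "museum_b": {"author": "creator", "year": "date_created"},
}

def to_common(source, record):
    """Translate one institution's record into the shared vocabulary."""
    table = MAPPING_SPEC[source]
    return {table.get(field, field): value for field, value in record.items()}
```

The appeal is that an aggregator maintains one mapping table per participant, and nobody has to rebuild their existing infrastructure.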

Overall, I really enjoyed Joshua’s discussion.

The next lecture included a musical performance by Alfred Darlington, AKA Daedelus.  He showed up with a device, which I assume he fabricated, that had what he called wire “spaghetti” everywhere and dozens of knobs.  He turned it on (it started glowing in colors), briefly described his process, and proceeded to perform a small appetizer.

I appreciated his discussion about his artistic output.  He stated he was usually trying to tame the “cacophonic” mess that the device wanted to put out.  My architectural design process also involves battling the output of machine-made mess, so it really hit home.  I wanted to chat with him for a while after the exhibition, but he vanished as strangely as he appeared.

Next, Bibler and O’Reilly performed a dance that utilized the whole room.  Not knowing really anything about contemporary dance, I’ll have to take their word for it.  At the end, they revealed their composition’s instructions, but I wasn’t close enough to examine the document.

The final presenter of the first half was Jessica Rajko of ASU.  I’m still not really sure what her lecture was focusing on.  I didn’t gather a computational artistic approach, unlike the previous and following presenters.  Instead, she lectured on the prevalence of “dance” and “computation” in the peer-reviewed record.  She seemed to desire credit for performers in scholarly publication and fixated on the lack of social justice ideology in technical papers exploring concepts of dance and performance.  She listed the journals publishing this type of work, and most were highly technical, so I wanted to raise the question of whether something like SIGGRAPH is the proper environment to insist on non-technical subjectivity.  Also, unless the performer was a co-author or co-creator of the concepts in the paper, I found it irrelevant to credit someone who merely executed an iteration of the performance piece.  To me, it would be like listing all the names of the contractors, electricians, masons, and plumbers on a building that I designed.  Unless they contributed to the design process, they don’t need to be named in a white paper on the innovations of my building.

The first half ended with a feast of Mexican food (apt here in L.A.) and time to mingle.  In the neighboring room, there were projects exhibited for the symposium by various artists and designers.

Image and Sculpture by Ryan Schwartzkopf

The second half kicked off with a panel discussion with the first speakers, minus Alfred.  While most of the topics were interesting, I especially liked Sara’s insights on her time at JPL, engaging with scientists and engineers who do not appear to value the artistic output she brings to the table.  To paraphrase, she said that while it might be instinctive to get offended when a non-artist disrespects our process, we should try to build bridges instead.  I couldn’t agree more.  The worst thing we as artists can do is alienate ourselves from STEM aficionados with perpetual offense.  We need to bring value to data analysis and computational design, not insist that it is valuable.

Sara also questioned whether art should really need a lengthy preamble about the life struggle of the artist and their body of work in order for the piece to be appreciated.  I think this is a crutch that many contemporary artists lean on.  In a recent conversation with sculptor Ryan Schwartzkopf, we agreed that attention to the importance of craft and quality in art has taken a back seat.  Just visit the Broad and other contemporary museums: many of the newer pieces have a larger “About the Artist” box than the artwork itself.  If art requires constant or exceedingly lengthy explanation before it can be enjoyed, is the art really that noteworthy?

The panel wrapped up and the second batch of speakers began.  After getting a feel for the presented work, I was more comfortable examining the next speakers more closely.

Surabhi Saraf spoke about her founding and curation of the Centre for Emotional Materiality.  She presented a project satirizing our obsession with the smartphone by composing a performance in which people walk slowly holding the isolated material components of an iPhone.  Next she showcased animations of the “Awoke” project and other VR criticisms.

I had a lot of questions.  First, I get the initial impact of seeing the slow crawl of smartphone drones in public; however, I think the focus on extracted elements of the phone is misplaced.  Are the people obsessed with the phone and its respective isolated elements, or are they obsessed with the content flowing through the device?  How is this any different from the fixation of people obsessed with television from the 1950s through the ’80s?  The digital content is the thing that should have been extracted and placed in the hands of the performers.

Second, she mentioned something about VR provoking a kind of “blindness” in the agent, with which I sort of agree.  I’ve never really thought the VR movement was that inspiring, though maybe AR has some value.  I’m not running out to get an Oculus anything, because it’s mainly a fad.  However, I’d argue the headset is more of a prosthesis than a form of blindness.  Especially in a place like the Centre for Emotional Materiality, I would expect VR to be employed more as a prosthetic than a mask.  Also, the Awoke project employs a flat screen, so how is that project not subject to the same blinding or obscuring of the viewer?  That conflict came to mind when the next project appeared.

The Awoke project features a virtual blob that constantly mutates.  It’s really interesting, and it reminds me of my initial exercises in NURBS mutation back in 2004 and 2005.  I had a similarly shaped blob that was under constant local and global mutation.  I used it as the basis for particle mutation in my Master’s Thesis, and it’s the kernel of all my numerical and shape mutation to this day.  I wanted to ask her how she programmed her Awoke mutation, because some of her geometry didn’t have many of the blebs that my early iterations had.  I suspect it was poly-based, but it was hard to tell from the projector resolution.

Overall, her projects never gave me the connection from the data to the form that I needed to understand how “emotion” and “materiality” relate.  Was she focusing on how human emotion translates to digital materiality?  Or was this a new form of computational materiality?  If so, how does she define that materiality, and has she established a taxonomy of digital emotions for our understanding?  I would have loved to ask her, but we ran out of time.

The next project was the PAM site, curated by Brian Getnick.  As an architect, I was intrigued by Brian’s lecture.  He didn’t have much to say directly about computation; his focus was primarily on how architectural space and programming facilitated his PAM project.  His entire premise was spatially based, contrasting his intimate performance space against the convention of a large concert hall.

I wanted to ask him in what specific ways his space provided a better setting for dance than a stage.  He mainly just asserted that it did, but I would have liked a few examples.  The lectures were kind of short, so maybe he just didn’t have time to show a video or two.

Also, it would be interesting to hear how some projects failed in that space.  Were there any projects that, in hindsight, he thought could have used a different spatial layout?  Were there any that would have worked better on a stage?  Could he establish a spatial/programmatic taxonomy for future dance curators to exhibit their productions, and release that information in a book or video?  Architecturally, this would be a valuable resource when designing a contemporary program; people tend to default to “open plan” because there are no resources delineating how such space should be allocated.

Soon after, Parag Mital presented some beautiful data visualization projects and image/video translation work.  I had seen some of his work in the past, so there wasn’t much of a surprise.  He reminds me of when I met Robert Hodgin (of Flight404 fame): both have an instinctual ability to translate numerical and string data into elegant, palatable, user-friendly interactive artwork.  If we had had time, I was mainly going to ask how he aesthetically selects his XYZ, time, and image fragmentation locations, but I already know the answer.  I could hang out with him for a while.

Next, Shelleen Greene presented a project about an animatronic android face and head named BINA48.  I had a hard time connecting what she was saying with what she was projecting on the screen.  It wasn’t so much a lecture as a recitation.  Maybe she wanted to be very selective with her words, so she recited her text very distinctly, almost (ironically) robotically.  It wasn’t until the videos of the BINA48 robot and its internal AI speech that I was able to gather the crux of the premise.

Greene spent the majority of her lecture attempting to dovetail AI and intersectionality.  The videos of the BINA48 robot showed a machine covered in a flexible latex skin, coincidentally of brown tone.  The robot was designed after a real woman and there was a rather lengthy intersectional backstory.  Overall, the videos showed some of the various AI conversational failings, which was mildly amusing to the audience of computational artists.

The robot is half humanoid android, half AI software speaking through a mouth speaker.  When a real human interacted with it, it obviously failed a standard Turing test for the most part, which made the presentation kind of difficult to follow.  If this robot cannot pass as a generic human, how can it be nuanced enough to express any form of intersectional specificity?  By extension, the racial component of the presentation seemed a non sequitur.  I, and I assume most of the audience, was likely visualizing a form of intersectional Turing test as the video continued.  I wanted to ask Greene: if the AI software didn’t have the Hall-of-Presidents, Disney-esque animatronics and just communicated via text message, would her intersectional and racial argument still hold water?

Computational image processing by Catherine Griffiths


Lastly, Catherine Griffiths of the USC School of Cinematic Arts presented various interactive visualization projects.  I am also fascinated with cellular automata (nearly all my projects have some form of CA; even my home page has had a CA artwork on it for a decade now), and Catherine kicked off her presentation with various integrations of CA and Game of Life (GOL) animations overlapping images and video.  She also showed elegant interactive representational programs developed at her lab.  These utilities express data sets and relations so that non-technical people can understand varying nonlinear configurations.  She and I share a lot of concepts in our programming, notably the use of emergence to generate patterns and form.

Computational image processing by Catherine Griffiths

She ended her lecture by discussing how some biases can be visualized in her tools, but didn’t go in depth.  She alluded to how a “small bias” could lead to an emergent phenomenon, identifiable on screen.  I would have asked her: if and when she found biases, how did she identify the emergent growth, how did it manifest, how did she relate it back to the bias, and how did she make the connection that an improper bias would result in a particular behavior?  Were there outliers or clusters or forms or movements that were extraordinary?  Did she build up an intuition from constant use of her tools that allowed her to see abnormalities?  Did she establish a dictionary of tool behaviors, allowing general users to understand the system structure?  That’s sort of the problem with emergence: a small rule set leads to counterintuitive global behaviors.  If the output is counterintuitive, how can anyone genuinely assert that it is the result of aggregated bias?
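The “small rule set, counterintuitive global behavior” problem is easy to see in the simplest CA she drew on.  A minimal Game of Life step (my own sketch, not her tooling; the grid is a set of live cell coordinates):

```python
# Minimal Conway's Game of Life step. The grid is a set of live (x, y)
# cells; this is a generic sketch, not Griffiths's implementation.
def neighbors(cell):
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """Apply the B3/S23 rule: birth on 3 live neighbors, survival on 2 or 3."""
    counts = {}
    for cell in live:
        for n in neighbors(cell):
            counts[n] = counts.get(n, 0) + 1
    return {c for c, k in counts.items()
            if k == 3 or (k == 2 and c in live)}

# Three cells in a row ("blinker") oscillate forever between a
# horizontal and a vertical bar - global behavior nothing in the
# two-line rule obviously predicts.
blinker = {(0, 1), (1, 1), (2, 1)}
```

That gap between the two-line rule and the oscillating pattern is exactly why attributing a particular emergent cluster back to a particular input bias is such a hard claim to ground.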

The day ended with another panel discussion.  I wanted to ask all these questions, but I felt it would have bogged down the Q&A.  Also, we were running out of time, so the moderators rightly asked general questions instead of pointed ones.

Overall, it was a good symposium.  The diversity of processes and artists was really interesting, and it deserved more exposure.  I would have liked to see it in a lecture hall with a hundred more people.  You don’t get to see this kind of cross-disciplinary interaction often.

If you attended and have comments, feel free to share them below.

About the Author

Nicholas Pisca: Founder, 0001d LLC; Former Technical Manager, Gehry Technologies; Former Lecturer/Adviser/Faculty, UCSB MAT / USC / SCI-Arc; Author, YSYT; Editor, 0001d BLAST.