Chris Funkhouser is a new media poet and scholar whose work has explored digital poetry, hypertext, MOOs, and
digital performance, among other computer-mediated literary forms.
His evocative digital poetry and software-mediated songs have been performed and/or exhibited internationally, including at the Banff Centre, E-Poetry 2009,
Interrupt, Unnameable Books in Brooklyn, NY, and the 4th International Conference & Festival of the Electronic Literature Organization.
He has been a Digital Poet-in-Residence at the Bowery Poetry Club in New York City, and in 2006 he was awarded a Fulbright Scholarship
to work at the Multimedia University in Cyberjaya, Malaysia.
Chris Funkhouser is an Associate Professor in Humanities
and Director of the Communication and Media Program at the New Jersey Institute of Technology.
His books include New Directions in Digital Poetry (NY: Continuum, 2012)
and Prehistoric Digital Poetry: An Archaeology of Forms, 1959-1995 (University of Alabama Press, 2007).
In his statement for Authoring Software, he describes how his recent sound and projection "songs"
were created from vast databases of related words and phrases -- using Eugenio Tisselli's MIDIPoet software
for the real-time composition and performance of interactive visual poetry.
For instance, in "ELO Song", performed in Providence, Rhode Island at the Electronic Literature Organization's 2010 Conference,
striking notes on the bass guitar produced short texts from extensive databases of Internet
search results. Words and anagrams for Providence, Rhode Island were seen on the projection screen, sometimes in
fixed locations and sometimes in random positions.
About a series of works he performed at Unnameable Books in Brooklyn, he observes in his
statement that "Variation, particularly in these four works, is lively and
effective since the appearance of each word is directly connected to the musical rhythm, making
the synchronization between the two perhaps even more recognizable and more powerful in effect
-- as the sound moves, the words move."
To find out more about Chris Funkhouser's work, visit
http://web.njit.edu/~funkhous/
Chris Funkhouser: Making MIDIPoetry
An informal article I wrote for the netartery group blog, titled
"On using tools made by comrades,"
narrates the construction of a few recent works I produced using programs or applications devised by other artists practicing in the field of
digital poetry. Experiments with Jim Andrews' dbCinema, Charles O. Hartman's PyProse, and the
Global Telelanguage Resources (GTR) Language Workbench (Andrew Klobucar/David Ayre) are divulged, as are my preliminary efforts using Eugenio Tisselli's MIDIPoet,
which I recall and expand on here.
I learned about Tisselli's program in 2008, when he and I participated in Interrupt, a literary arts festival at Brown University.
Tisselli used MIDIPoet to propel a digital
poetry performance (featuring graphics, text, and gesture) with a mobile phone -- an approach to presentation he also used at E-Poetry 2009 in Barcelona.
Having known about MIDI-based art since the mid-90s, when friends of mine studying with George Lewis in Rensselaer Polytechnic Institute's (RPI) iEAR program were coordinating sound
and video through MIDI, I was intrigued that a digital poet had engineered such a tool. I had used MIDI once before, when an audio engineer who produced
some recordings of mine at Multimedia University programmed a keyboard to collect samples of my voice reading lines, and I made a sound poem with them
(see
http://web.njit.edu/~funkhous/selections_2.0/content/mmu.html). Tisselli's arrangement at Interrupt, which allowed for the integration of
textual components, planted a seed in my mind: a potential direction to explore at a future date. In 2010, partly due to a stagnation I felt occurring
in my animated works, coupled with the mediocrity I sensed in many -- but by no means all -- digital performances,
I began to fashion works using MIDIPoet.
Deciding to use MIDI was easy; figuring out how to do it was not. Before making work, I had to determine what instrument to use and how
to effectively deliver sound signals into the computer/software to produce results. I had an external sound card to connect to a laptop
and needed to find a way to convert sound to MIDI. My options for (readily available) audio sources were voice, electric guitar, bass guitar,
and canjo. Due to technical complexities, voice and canjo were eliminated as possibilities, and since an inexpensive pitch-to-MIDI converter
made for the bass is available, I decided to try it out.
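As an aside for readers curious about what such a converter delivers downstream, here is a minimal sketch in Python using the mido library (an assumption for illustration, not part of MIDIPoet; the port name is hypothetical): each detected pitch on the bass arrives as an ordinary MIDI note-on message.

    import mido

    # Hypothetical port name; actual names depend on the converter and OS.
    with mido.open_input('Pitch2MIDI Port 1') as port:
        for msg in port:
            # Each plucked (or sympathetically vibrating) string is
            # reported as a note_on message carrying pitch and velocity.
            if msg.type == 'note_on' and msg.velocity > 0:
                print('note', msg.note, 'velocity', msg.velocity)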
MIDIPoet contains two components, Player and Composer. Works are made in Composer (e.g., by loading data and setting parameters)
and are projected through the Player. Use of the Player is not difficult, as long as you assign the proper input port information in the
Player and follow the correct order of operations; what is called for is a relatively straightforward multi-step process of locating, selecting, importing,
and playing the proper file. Another matter that arose involved an unidentifiable setting somewhere within MIDIPoet
that projected the sound of a piano when hardware controls were in certain positions -- an aspect of output I used to my advantage to create
sonic cacophony in two of my early compositions.
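The port-assignment step has an analogue in any MIDI-receiving program. As a point of comparison only (Python with the mido library, not MIDIPoet's own interface), discovering the exact input-port names a system exposes looks like this:

    import mido

    # List the MIDI input ports the system exposes; a pitch-to-MIDI
    # converter must appear here, and its exact name is the value a
    # player needs for its input-port setting.
    print(mido.get_input_names())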
I came up with some ideas for compositions, collected texts to use as data, composed basslines, and devised kinetic screen properties to
implement using MIDIPoet Composer. The process of bringing these ideas to the screen primarily involved importing text into
the program's data repository (i.e., pasting text into a field accessed via a pull-down menu), then making adjustments to the parameters
responsible for placement and coloration of the text within Composer. Tisselli assisted me, coaching me in using
the program by sending a generic MIDIPoet (.mip) file and, without further instruction, advising me to manipulate his pre-programmed
settings. Having this basic input, encouragement, and support was invaluable to the process of working with, and making discoveries
inside, the program. Within a day, I had managed to configure the MIDIPoet aspects of the "Song" I presented at the
4th International Conference & Festival of the Electronic Literature Organization (ELO), which involved acquiring anagrams and "candidate words" for the word
"Providence", solving issues of harmonics (string vibrations from the bass), and establishing particulars of text placement
(including angle) and coloration. This part of the process involved studying the size of the words, establishing a range of
placement for them in appropriate relation to the screen's x-y coordinates, and identifying the R(ed), G(reen), and B(lue)
text attributes (which I did by referencing a Photoshop color menu).
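Composer's settings are made through its graphical interface rather than in code, but purely as an illustration of the kinds of values involved, here is a toy sketch in Python (the names and structure are invented for illustration, not MIDIPoet's .mip format):

    import random

    # Illustrative parameters of the sort set for a text behavior:
    # placement bounds, angle, and an RGB color.
    params = {
        'x_range': (40, 600),   # horizontal placement bounds (pixels)
        'y_range': (60, 400),   # vertical placement bounds
        'angle': 0,             # rotation of the text
        'rgb': (255, 210, 80),  # e.g., sampled from a Photoshop menu
    }

    def place_word(word, p):
        # Randomize position within the established x-y range.
        x = random.randint(*p['x_range'])
        y = random.randint(*p['y_range'])
        return {'text': word, 'x': x, 'y': y,
                'angle': p['angle'], 'rgb': p['rgb']}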
Having gotten a grip on how the program worked, I explored further and fairly quickly came up with some more ideas for MIDI
poetry bass-as-trigger formulations. I prepared a second anagrammatic piece -- in some regards simpler in its dynamics,
because the placement of text on the screen was consistent, but also more advanced, because the color of the text gradually
(automatically) cycles through a range of hues -- which was eventually presented at (and dedicated to) the ELO festival.
The ferment of ideas led to a third piece, in which the projected language, read and "played" simultaneously, scrambled and varied
the placement of fragments of the name of a style of poem (i.e., Flarf). This required only minimal
adjustments (replacement of text, change of color) to my initial piece.
In my first MIDIPoet compositions, I realized I should take advantage of the way the musical (string) harmonics
could play a role in the delivery of output. To my ear and eye, I was able to create a type of "rolling thunder" of sight
and sound, depending on how I held, touched, and positioned the guitar itself. For example, if I shook the guitar forcefully
enough to make the strings move (i.e., without actually plucking a string), I could make many more notes and textual
transitions occur than I could by touching the strings. Another of the happy accidents I discovered using MIDIPoet was
learning that, with the simple adjustment of a knob on my sound card, I could also make the bass sound like a distorted piano.
As mentioned above, I exploited this function in the first two pieces I presented at ELO
(see
http://www.youtube.com/ctfunkhouser#p/a/u/1/L5kffUObtN0 )
-- although it should be mentioned that this effect subsequently, mysteriously, became disabled. In the song I sang at ELO, in which
the bass sounds only as bass, MIDIPoet randomly selects, and randomly places on the screen, one of the lines stored in the database each
time a note is played -- and clarity of playing makes a significant difference to how the output appears. The primary ingredients of
this particular projection are described alongside
documentation of the performance.
My most recent compositions mainly involved making adaptations to constructions I had already completed.
Preparing to perform on a bill with Alan Sondheim at Unnameable Books in Brooklyn four months after my initial
experiments, I decided to prepare some pieces using MIDIPoet and attempted to explore new aesthetic ground
with the program. Tisselli and I began the process of collaboratively engineering projections for one of my
songs, which involved the rendering of visual effects on images imported into MIDIPoet, but we were unable
to complete the task because of responsibilities he had in the Sahara at the time
(see
http://netartery.vispo.com/?p=523)
I made some files that worked, but due to my inexperience with the program, they were essentially limited to rendering a
randomly drawn slide show of images whose appearance was triggered by the playing of notes, and the end results
were not yet compelling enough. There are ways of bestowing graphical effects on images, but the process of doing
so is not straightforward -- the user cannot simply identify patterns of visual fades and automatically apply graphical
filters. This aspect of the program is under-developed, to say the least, and is probably as good a reason as any for
someone seeking to work in this area to use a more advanced program. The three pieces I did prepare and
perform featured modifications of parameters and techniques of the sort previously described.
What actually happens with regard to the presentation of text? Two types of events occur in my MIDI
poems thus far. A randomized series of names and words is anagrammatically presented on the screen,
their appearance activated by the striking of a note on the bass. Successions of words/phrases are either
fixed in a specified location or presented in a series of randomized positions; the color of the language
is either constant or shifts within a range specified within the software. What do the texts in the database
consist of? In the MIDI portion of "ELO Song", the database consists of 1,738 anagrams and candidate
words for "Providence RI" (a total of nine words were removed from results acquired at the Internet
Anagram Server). In "Creating Recolonization Literature", the 83 three-word anagrams for "Electronic
Literature Organization" are permuted along the horizontal center. "Far", the Flarf piece, employs four
candidate words for "flarf" (Far, La, Fa, A) and one phrase (A Far). "Vulnerability" uses 1,447 anagrams
(reduced from an original count of 6,215 anagrams and 560 candidate words). The database
for "Brooklyn Song" contains 33 lines (all of the 31 candidate words, one anagram, and the title
word). "Alan Sondheim" -- largely adapting the parameters of "Creating Recolonization Literature" --
also appears at the horizontal center, containing 1,641 lines chosen from the 28,724 anagrams and 647
candidate words made available through the letters of his name. Positioning of the text is randomized
in four of the six pieces. Variation, particularly in these four works, is lively and effective since
the appearance of each word is directly connected to the musical rhythm, making
the synchronization between the two perhaps even more recognizable and more powerful in effect
-- as the sound moves, the words move.
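The behavior described in this paragraph reduces to a small event loop. A minimal re-creation in Python (a sketch for illustration, not Tisselli's code; the database lines are stand-ins):

    import colorsys
    import random

    database = ['Providence', 'drop evince']  # stand-in lines
    FIXED_POS = None    # e.g., (320, 240) pins the text to one location
    hue = 0.0

    def on_note(note, velocity):
        # Each struck note draws one random database line, at a fixed
        # or randomized position, in a constant or cycling color.
        global hue
        line = random.choice(database)
        pos = FIXED_POS or (random.randint(0, 640), random.randint(0, 480))
        hue = (hue + 0.01) % 1.0    # gradual, automatic hue cycling
        r, g, b = (int(c * 255) for c in colorsys.hsv_to_rgb(hue, 1.0, 1.0))
        print(line, pos, (r, g, b))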
As mentioned in the netartery article, I was surprised to learn from Tisselli that only a few
other artists (namely Mitos Colom and a VJ collective called Telenoika) have seriously
engaged with MIDIPoet. The concept of using a musical/sound interface to push digital
poetry into sensible new aesthetic realms seems somewhat important to me. Tisselli's
freely accessible and not-terribly-complicated-to-use program offers media writers a
convenient way to begin exploring this area of the discipline, and I hope eventually to see
more experiments by others.
To find out more about MIDIPoet on Authoring Software, see
http://www.narrabase.net/#eugenio
To download a copy of MIDIPoet, go to
http://www.motorhueso.net/midipeng/