The Basic Process:

The images, objects and sounds I’m developing are all derived from legitimate empirical data. As illustrated on the right, the process begins by recording a “polysomnograph” (or PSG). At minimum, a PSG measures electroencephalographic (EEG) or “brain wave” activity, eye movement (through electrooculography, or EOG), and muscle tone (through electromyography, or EMG). Often it will also collect data on heart activity (EKG), temperature and respiration (since one of the primary medical applications involves helping those with sleep apnea). These PSG recordings are then reviewed by experts (or, increasingly, by machines) and the different stages of sleep are identified in what is known as a “hypnogram”. The hypnogram shows when, for example, a subject has entered REM sleep, or when they are in “slow-wave” deep sleep.
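
At bottom, a hypnogram is nothing more than a sequence of sleep-stage labels, one per scoring epoch (conventionally 30 seconds). The little sketch below, which is my own illustration and not part of any scoring software, shows how "when did REM start and end?" falls out of such a sequence. Stage names follow the common AASM convention (W = wake, N1–N3 = non-REM depth, R = REM); the hypnogram data is invented.

```python
EPOCH_SECONDS = 30  # standard sleep-scoring epoch length

# A toy hypnogram: one stage label per 30-second epoch (invented data).
hypnogram = ["W", "W", "N1", "N2", "N3", "N3", "N2", "R", "R", "R", "N2", "W"]

def rem_periods(stages, epoch_s=EPOCH_SECONDS):
    """Return (start_seconds, end_seconds) for each unbroken run of REM epochs."""
    periods, start = [], None
    for i, stage in enumerate(stages + [None]):  # sentinel closes a final REM run
        if stage == "R" and start is None:
            start = i                             # REM run begins at epoch i
        elif stage != "R" and start is not None:
            periods.append((start * epoch_s, i * epoch_s))
            start = None
    return periods

print(rem_periods(hypnogram))  # [(210, 300)] — one REM period, 210 s to 300 s
```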

The PSGs I use are either taken from semi-publicly available sleep-science databases of anonymized subjects, or recorded myself using my own biosensing equipment (see below). I then review the data to find a suitable period in which the subject is experiencing REM sleep. (NB: It is a mistake to equate REM sleep with dreaming, but we can say that people in a REM state are dreaming roughly 80% of the time. Read more on this important issue in sleep science here). Quite often I select the final dream of the night, as it is in the morning that our dreams tend to be longest and most interesting. I use the open-source “EDF Browser” to open, crop, downsample and export the files. It is pretty robust signal-processing software made by Teunis van Beelen, and while it is designed for people who are already initiated into the wonders of “bandpass filters”, I’ve managed to figure out at least some of it. In the future I hope to wrestle with “MATLAB”, which is the sine qua non of the neuroscientific signal-processing world.
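
For the curious, the crop-and-downsample step can be mimicked in a few lines of plain Python. This is a simplified sketch of the idea, not what EDF Browser actually does internally: it crops a signal to a time window and downsamples by block averaging, which is only a crude stand-in for the proper low-pass filtering a tool like EDF Browser applies before decimating. The "EEG" here is a synthetic sine wave.

```python
import math

def crop(samples, rate_hz, start_s, end_s):
    """Keep only the samples between start_s and end_s (seconds)."""
    return samples[int(start_s * rate_hz):int(end_s * rate_hz)]

def downsample(samples, factor):
    """Average non-overlapping blocks of `factor` samples (a crude low-pass)."""
    n = len(samples) // factor
    return [sum(samples[i * factor:(i + 1) * factor]) / factor for i in range(n)]

# Fake 256 Hz "EEG": a 2 Hz sine wave, 10 seconds long.
rate = 256
signal = [math.sin(2 * math.pi * 2 * t / rate) for t in range(rate * 10)]

rem_window = crop(signal, rate, 4.0, 8.0)  # keep seconds 4 through 8
reduced = downsample(rem_window, 4)        # 256 Hz -> 64 Hz

print(len(rem_window), len(reduced))  # 1024 256
```

Real recordings, of course, arrive in the EDF file format rather than as Python lists, but the arithmetic of windowing and rate reduction is the same.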

I think many artists who work with EEG data use techniques that directly connect the brainwave input to audio or graphics-processing software. Either they themselves have computer programming backgrounds, or they have the assistance of very clever programmers and tech people! I have no formal objection to such projects, but for this particular work I’m largely interested in the process of discovery and experimentation itself, and in figuring out what I can achieve given my fairly limited, learned-it-from-watching-YouTube skillset. So, at least for the time being, I take the raw data and import it into good old Microsoft Excel. There I subject the numbers to various algorithms I’ve developed, and use the results to create the foundations of musical scores, data visualizations using the “Processing” language, or much more low-tech experiments with watercolor paint.
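
To give a flavour of what "data into the foundations of a musical score" can mean — and to be clear, this is my own illustrative example, not the actual Excel recipe described above — one simple approach is to bin each sample’s amplitude into a note of a fixed scale. The input values below are invented.

```python
SCALE = ["C", "D", "E", "G", "A"]  # C-major pentatonic: hard to make sound bad

def to_notes(values):
    """Map each value to a scale degree by its position between min and max."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0                # avoid division by zero on flat input
    notes = []
    for v in values:
        # Round the normalized value to the nearest scale degree.
        idx = int((v - lo) / span * (len(SCALE) - 1) + 0.5)
        notes.append(SCALE[idx])
    return notes

eeg_snippet = [12.0, 40.5, 33.1, 5.2, 21.7, 47.9]  # invented amplitudes
print(to_notes(eeg_snippet))  # ['D', 'G', 'G', 'C', 'E', 'A']
```

The same normalize-then-bin move works for visual mappings too (amplitude to brush width, frequency band to colour), which is essentially what hand-tuned spreadsheet formulas can compute column by column.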

My own polysomnography:

Only recently has technology developed sufficiently to give average consumers access to devices that can do things like measure brainwaves. Perhaps it would be overstating things to call such tech a “fad,” but public demand for these devices seems to be growing as people want to learn more about their brain activity and/or engage in various “neurofeedback” exercises. Many of these “brain-computer interfaces,” or BCIs, seem to be quite user-friendly right out of the box, but they are designed to assist with meditation and neurofeedback applications rather than sleep study. In spite of their ease of use, the scientific credibility of such devices remains, it seems, a bit up in the air among the pros (more discussion here).

I’ve opted for a somewhat less user-friendly but rather more robust device: a “Cyton” board made by the good people at OpenBCI. While this device has a somewhat steeper learning curve and is quite a bit fussier to use, it allows for all kinds of biodata recording, so the EEG, EOG and EMG required for a proper polysomnograph can all be captured. It is among the more accurate instruments available at the consumer level, and I like that it is all open source and community-driven.

The goal of my evolving process is to capture three EEG readings, at least one EOG reading, and an EMG reading from the chin. This allows me to create my own hypnogram, identify a period of REM sleep and follow the same procedures I use with the anonymous data. I am thoroughly documenting the process of figuring out how to do all of this in the Blog section, and consider this process of discovery and its documentation to be an integral part of the “artwork” that is the DRL as a whole.
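
Why three kinds of signal? A toy heuristic makes the logic visible. Automatic sleep staging is a serious research field and real scorers use far more than this, but roughly speaking, REM is marked by active eyes (busy EOG), a slack chin (low EMG tone), and relatively fast, low-amplitude EEG. The thresholds and values below are invented purely for illustration.

```python
def looks_like_rem(eog_activity, emg_tone, eeg_amplitude):
    """Crude per-epoch REM check on three summary measures (arbitrary units)."""
    return eog_activity > 0.7 and emg_tone < 0.2 and eeg_amplitude < 0.4

# One summary triple per 30-second epoch: (EOG, EMG, EEG amplitude).
epochs = [
    (0.1, 0.8, 0.9),   # quiet eyes, tense chin, big slow waves -> not REM
    (0.9, 0.1, 0.3),   # darting eyes, slack chin, low-amplitude EEG -> REM
    (0.8, 0.15, 0.35), # likewise -> REM
]

print([looks_like_rem(*e) for e in epochs])  # [False, True, True]
```

No single channel settles the question — low-amplitude EEG also shows up in wakefulness, which is why the chin EMG and the eye movements have to be read alongside it.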

For more information about the general science of sleeping and dreaming, the processes involved in this project, or advice on some good books to read, please refer to the slowly expanding blog section!

Step 1: A sleeper is connected to a biosensor (NB: Mine is much smaller!)
Step 2: Brainwaves and other biodata are collected
Step 3: The raw data is exported into usable form
Step 4: The data is transformed from the jagged lines of step 2 into... something else!
The OpenBCI Cyton biosensor.