Tearable Dreams

Sketch: https://editor.p5js.org/nkumar23/sketches/GoX7ueD-x

Github: https://github.com/nkumar23/tearcloth2D

Original Plans

I originally entered this final hoping to create a sketch that visualized the quality of my sleep and was controlled by a muscle tension sensor I had already built. I wanted to visualize the entirety of my night’s sleep as a continuous fabric slowly undulating, with holes ripped through it when I clenched my jaw -- detected by the sensor.

I drafted up an initial roadmap that looked like this at a high level (sparing you the detailed steps I created for myself):

  1. Create the fabric using ToxicLibs and p5

    1. Adding forces and texture ripping to Shiffman’s example

      1. Start with mouse/key interactions for rip and forces (like wind/gravity)

  2. Create a clenched/not-clenched classification model

    1. Collect and label sensor data for training

      1. Create simple interface in p5 to record clench/not-clenched with timestamp so that I can label sensor values

      2. Train ml5 neural net using labeled data

  3. Link fabric ripping to sensor data

  4. Run this in real time and capture video of animation during sleep

I headed into this project knowing that I’d most likely tackle chunk 1 and pieces of chunk 2, but likely not the entire thing. Along the way, I wound up focusing entirely on the cloth simulation and abandoned the rest of the project for the time being. I added some audio interaction to round out this project in a way that I thought was satisfying.

I’ll describe the process behind building this cloth simulation, and what the next steps could look like from here.

Inspiration

As difficult as dreams are to remember, I have short, vivid memories of an undulating surface, slowly changing colors, that at times looks like a grid from The Matrix, and at other times looks filled in. As I began thinking of how I might want to visualize my sleep, I went immediately to this sort of flowing, fabric-like image.

Luckily, as always seems to be the case, there is a Shiffman example for cloth simulation! I also found this example of a tearable cloth written in plain JavaScript.

Process

I started with Shiffman’s 2D cloth with gravity example. He did the legwork of importing the Toxiclibs library and aliasing some important components, like the gravity behavior.

Rip detection

From that starting point, I first wanted to figure out how to rip the cloth. To begin, I calculated the distance between the mouse click and each particle, and console-logged the particle’s coordinates when that distance was less than 10 pixels (i.e., which particle was clicked?).

https://editor.p5js.org/nkumar23/sketches/4NsLvcVlO
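
A stripped-down sketch of that check -- plain p5.js with no physics library, where a simple grid of x/y positions stands in for the cloth’s particles:

```javascript
// A hypothetical, simplified version of the click detection: plain p5.js,
// no physics library, just a grid of x/y positions standing in for particles.
let particles = [];

function setup() {
  createCanvas(400, 400);
  for (let x = 50; x <= 350; x += 20) {
    for (let y = 50; y <= 200; y += 20) {
      particles.push({ x, y });
    }
  }
}

function draw() {
  background(255);
  stroke(0);
  strokeWeight(4);
  for (const p of particles) {
    point(p.x, p.y);
  }
}

function mousePressed() {
  // Which particle was clicked? Log any particle within 10px of the mouse.
  for (const p of particles) {
    if (dist(mouseX, mouseY, p.x, p.y) < 10) {
      console.log('clicked particle at', p.x, p.y);
    }
  }
}
```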

Rip detection, but make it visual

Next, I wanted to see if these particles were where I expected them to be, and nowhere else. To do this, I displayed the particles rather than the springs and made them change color upon click.

https://editor.p5js.org/nkumar23/sketches/cz2XdRYfH

Spring removal

Then, I wanted to remove springs upon click -- which would “tear” the cloth. To do this, I had to add a reference to the spring within the particle class so that specific springs could be identified upon click.

I spliced spring connections and decided to stop displaying the spring upon click, which led to this aesthetic:

https://editor.p5js.org/nkumar23/sketches/-dtg2SCCy

That doesn’t really look like a cloth was torn! Where’s the fraying? This behavior happened because the springs were not removed from the physics environment -- they were just no longer displayed.

Removing the springs required adding a bit more logic. Instead of adding springs in the draw loop, we now have the framework for adding/removing springs within the Spring class’ logic. 

In the draw loop, we check whether a spring is essentially marked for removal upon click and remove it via the remove() function created in the Spring class. Thanks Shiffman for the help with that logic! 

https://editor.p5js.org/nkumar23/sketches/uykP2GyE2
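
Condensed, the tearing pattern looks something like the sketch below. It assumes toxiclibs.js is loaded via a script tag (as in Shiffman’s examples); the class and variable names are illustrative rather than the exact ones from my sketch:

```javascript
// A condensed sketch of the tearing pattern. Assumes toxiclibs.js is loaded
// via a script tag, as in Shiffman's examples; names are illustrative.
const { VerletPhysics2D, VerletParticle2D, VerletSpring2D } = toxi.physics2d;
const { GravityBehavior } = toxi.physics2d.behaviors;
const { Vec2D } = toxi.geom;

let physics;
let particles = [];
let springs = [];
const cols = 20;
const rows = 12;
const spacing = 15;

class Particle extends VerletParticle2D {
  constructor(x, y) {
    super(x, y);
  }
}

class Spring extends VerletSpring2D {
  constructor(a, b) {
    super(a, b, spacing, 0.9); // rest length + strength
    this.marked = false;       // flagged for removal when the cloth is "torn"
    physics.addSpring(this);   // springs add themselves to the simulation
  }
  remove() {
    physics.removeSpring(this); // actually take the spring out of the physics
  }
  show() {
    line(this.a.x, this.a.y, this.b.x, this.b.y);
  }
}

function setup() {
  createCanvas(400, 400);
  physics = new VerletPhysics2D();
  physics.addBehavior(new GravityBehavior(new Vec2D(0, 0.5)));

  // Grid of particles, each connected to its left and top neighbors.
  for (let j = 0; j < rows; j++) {
    for (let i = 0; i < cols; i++) {
      const p = new Particle(50 + i * spacing, 50 + j * spacing);
      if (j === 0) p.lock(); // pin the top row so the cloth hangs
      physics.addParticle(p);
      particles.push(p);
      if (i > 0) springs.push(new Spring(p, particles[j * cols + i - 1]));
      if (j > 0) springs.push(new Spring(p, particles[(j - 1) * cols + i]));
    }
  }
}

function draw() {
  background(255);
  physics.update();
  stroke(0);
  for (const s of springs) s.show();
  // Remove marked springs from the physics world, then drop them from the array.
  for (const s of springs) {
    if (s.marked) s.remove();
  }
  springs = springs.filter((s) => !s.marked);
}

function mousePressed() {
  // "Tear" the cloth: mark every spring touching a particle near the click.
  for (const s of springs) {
    if (dist(mouseX, mouseY, s.a.x, s.a.y) < 10 || dist(mouseX, mouseY, s.b.x, s.b.y) < 10) {
      s.marked = true;
    }
  }
}
```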

Adding wind + color

Now that spring removal created more realistic cloth physics, I wanted to add another force alongside gravity: wind. I wanted to simulate wind blowing through the fabric, creating an undulating, gentle, constant motion. But I did not want the wind to blow at one consistent “speed” -- I wanted it to vary a bit, in a seemingly unpredictable way. Perlin noise could help with this.

I brought in a “constant force behavior” from Toxiclibs and created an addWind() function that incorporated changes to the parameters of the constant force vector based on Perlin noise. 
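
In rough terms, the wind helpers could look like this -- a sketch meant to plug into the cloth sketch above, assuming toxiclibs.js exposes a ConstantForceBehavior with a setForce() method (initWind() is a hypothetical helper; addWind() is the function from my sketch, but its body here is illustrative):

```javascript
// Wind helpers meant to plug into the cloth sketch above. Assumes toxiclibs.js
// exposes ConstantForceBehavior with a setForce() method.
const { ConstantForceBehavior } = toxi.physics2d.behaviors;

let wind;   // the constant-force behavior acting as wind
let t = 0;  // noise "time" offset

// Call once from setup(), after the physics world exists.
function initWind(physics) {
  wind = new ConstantForceBehavior(new toxi.geom.Vec2D(0, 0));
  physics.addBehavior(wind);
}

// Call every frame from draw(): vary the wind vector with Perlin noise so the
// cloth undulates instead of streaming in one constant direction.
function addWind() {
  const wx = map(noise(t), 0, 1, -0.2, 0.2);          // horizontal gusts
  const wy = map(noise(t + 1000), 0, 1, -0.05, 0.05); // slight vertical drift
  wind.setForce(new toxi.geom.Vec2D(wx, wy));
  t += 0.01;
}
```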

Next, I wanted to add a similarly undulating change in color. As I looked at ways to use Perlin noise with color, I came across this tutorial from Gene Kogan that had exactly the kind of surreal effect I wanted. Here’s the implementation of everything up until now + wind and color:

https://editor.p5js.org/nkumar23/sketches/ugZc9Sry7
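
As a standalone illustration of the noise-driven color idea (this is not Gene Kogan’s code, just a tiny hypothetical sketch of the technique): sample Perlin noise at three slowly drifting offsets and use the values as RGB channels.

```javascript
// A tiny standalone sketch of the noise-driven color idea (not Gene Kogan's
// code): sample Perlin noise at three slowly drifting offsets and use the
// values as RGB channels. In the cloth sketch, the same values feed stroke().
let t = 0;

function setup() {
  createCanvas(400, 200);
}

function draw() {
  const r = map(noise(t), 0, 1, 0, 255);
  const g = map(noise(t + 500), 0, 1, 0, 255);
  const b = map(noise(t + 1000), 0, 1, 0, 255);
  background(r, g, b);
  t += 0.005;
}
```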

Adding sound

At this point I had a visualization that was pretty nice to look at and click on, but it seemed like it could become even more satisfying with some feedback upon click -- maybe through sound! I added a piece of music I composed and a sound effect I made with Ableton Live -- the finishing touches. Check it out here -- it’s the same as the sketch at the top.

https://editor.p5js.org/nkumar23/sketches/GoX7ueD-x
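
The sound hookup itself is small. A minimal sketch of the idea, assuming p5.sound is loaded and using placeholder file names rather than my actual exports:

```javascript
// A minimal sketch of the audio feedback, assuming p5.sound is loaded and
// using placeholder file names rather than my actual exports.
let music;
let ripSound;

function preload() {
  music = loadSound('music.mp3');  // background track (hypothetical file name)
  ripSound = loadSound('rip.mp3'); // tearing sound effect (hypothetical file name)
}

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(220);
}

function mousePressed() {
  // Browsers block autoplay, so the music starts on the first interaction.
  if (!music.isPlaying()) {
    music.loop();
  }
  ripSound.play(); // the effect fires on every tear/click
}
```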

Next Steps

I would like to add some physical sensors to control parameters for this sketch — things like the way the wind blows, ways to stretch the fabric, rip it in the physical world, etc. I’m not wedded to using the muscle tension sensor anymore, though!

I’d also like to add more thoughtful sound interactions. Perhaps there could be different interactions depending on when you rip a hole, where you rip it, and how much of the fabric is left.

More broadly, this assignment made me want to explore physics libraries more. It is pretty impressive how nicely this cloth was modeled with Toxiclibs’ help; there’s a whole world of other physics library fun to be had.

SuzukiMethod

Assignment: Create a piece of music that uses stereo output

Idea and Process

I decided to start using Ableton Live for this assignment after years of working in Logic Pro. Although switching DAWs is not like starting from scratch with a new instrument, it still takes time to get up to speed with the logic (no pun intended) of the new software. Not only do you need to re-discover where the features you’re used to are located, you also need to build an intuition for the new software’s opinions about music-making and workflow. Especially for these first few days of working in Ableton, I feel like — and will continue to feel like — a beginner again.

I started playing violin when I was 3. I took lessons with teachers who followed the Suzuki method, an opinionated approach to teaching music that emphasizes learning to play by ear. Reading music could come later — training the ear to hear music and repeat it was more important, just like the early days of learning a spoken language. I would go to group lessons, stand with a bunch of other kids with my violin, and repeat notes over and over, focusing on bow technique, then pitch, then phrases, until eventually I could play songs.

Around the same time as I was learning to play violin in the early 90s, my older brother (also a violinist) was listening to a lot of New York rap when we weren’t practicing classical music. The Wu-Tang Clan was at their peak, and their prodigious lyricist, the GZA, released Liquid Swords. I knew a lot of the lyrics to the title track off the album, even if I didn’t know their meaning; it’s remained one of my favorite + most-quoted songs of all time.

For this assignment, I wanted to start the process of switching to Ableton, knowing full well that it might not be pretty initially. I will likely use Ableton as we move to exploring 4+ channel output, so I wanted to start to develop a workflow to work in stereo-and-beyond.

I chose to sample audio from a Suzuki lesson during which Suzuki instructs kids on the bow motion needed to play Jingle Bells. I also took the drums from the end of Liquid Swords; in typical early-90s fashion, they’re repetitive and really only consist of a kick and a snare. Working with these samples is a reminder that it’s going to take some time to have my output sound the way I want it to; I need to practice + have patience.

I focused on learning how sampling works in Ableton. I chopped the audio from Suzuki and Liquid Swords and mapped it to my MIDI controller; it’s easier to do and more fully-featured than in Logic. I played with random panning and spread on the Suzuki vocal samples, two features that are immediately accessible in the sampler, but would have taken work to access in Logic. I was also easily able to take the violin samples and map them to a scale using the transpose pitch feature.


I also started to work with the EQ effects. I know I have a lot of work to do to make the mix sound good, but I started by trying to place the drums, violin samples, and vocals in different parts of the stereo mix. I then used the EQ effect to cut or boost different parts of the frequency spectrum for each of the sounds. I also created a mid-pass filter at the beginning to create some contrast on the drums/violins/vocals before the beat really comes in. The EQing could use a lot of work — I’d love to get some tips in class for how to approach EQing.

Ultimately, I wanted to create a groove that you could bop your head to and maybe even have a friend freestyle on, while trying to give each sound a clear sense of place within the mix. The violins were meant to gradually get more spacious and disorienting as I applied more echo/reverb. I’m not sure the piece fully succeeded at any of these goals: it has a bit of motion as different drums come in, but it could stand to have a bit more variation, and it could definitely have cleaner placement of sounds. The violin volume levels aren’t quite right yet. Still, I’m happy with getting off the ground and making something I can build on. Looking forward to spending some nights getting lost in learning Ableton and getting better at production!

Sound in Space Mono: Prepared Violin

Assignment:

Create sounds that play from a single (mono) output

Idea summary:

I created a “prepared violin” by attaching various resonant objects to the strings of my violin. The resulting sounds added new dimensions to the texture and harmonic quality of the traditional classical violin output. The violin itself is the mono output source.

Background:

This semester, we will be gradually building towards performing on a 40 channel setup. As we progress through increasingly more complex output arrangements, I will undoubtedly build up my understanding of technical tools like Max/MSP and get deeper into the weeds of DAWs like Ableton/Logic. For this first assignment, instead of jumping into those tools or arranging a piece of music, I wanted to start analog.

A mono output source can be anything from a single speaker to a single human voice. As a violinist for most of my waking life, albeit an infrequent one these days, I decided to use this assignment to dust off the strings and treat my violin as my mono output. Before diving into the details — one question to ponder: is the violin itself the mono output, or is each string the mono output? The strings need a resonating chamber in order for their vibrations to be amplified enough to hear, so in this sense, I suppose the whole violin with its hollowed-out body is the output source… But I’d love to hear the argument in favor of each-string-as-mono too!

Anyways. I wanted to experiment with the sounds my violin could produce, using John Cage’s experiments with prepared pianos as inspiration. In his pieces, Cage placed different objects in and on the strings of his piano, completely altering their sonic properties. Sometimes strings were muted, lending a kind of percussive quality instead. Other times, the piano sounded like a distorted electronic instrument whose formants had been shifted. Could I get something similar from my violin?

Process:

Pianos are big, with enough space (and flatness) to stick a ping pong ball between strings. Violins are not big and are not flat between strings. Still, some objects would definitely alter the properties of the strings. I landed on 4 different experiments:

  • a single safety pin

  • two safety pins (one small enough to clip around a string, one large enough to cross two strings)

  • a metal nail

  • a stripped electrical wire

Each of these objects did a couple of things — they changed the way the violin string vibrated, and they also vibrated themselves. As a result, in each of the following videos, the violin produced novel sounds. The safety pins bounced and shifted as they vibrated; plucking strings in a way that optimized for bouncing the safety pins led to the most interesting sounds. All of the objects produced interesting harmonics when they lightly touched the violin strings; my favorite example, though, is the wire. After I wrapped the wire around the G string, there was a bit of a handle left over that I could use to slide the wire up and down the string. You can hear the harmonics change as I slide the wire around.

I’ve included clips from each of the experiments below.

Single safety pin:

Plucking Safety Pin

Two safety pins

Nail

Plucked nail

Wire

Next Steps:

While not every sound produced here is immediately appealing, all of them do have some unusual property that I could work into compositions. I’d like to record these into a sample bank and then apply audio effects to create some wholly new textures for use in a future piece of music, perhaps in the stereo assignment. Many of the electronic artists I enjoy listening to, from Aphex Twin to Flying Lotus, record their own analog sounds before manipulating them into the samples that sound so foreign; with my prepared violin experiments, I think I’ve found the beginnings of some interesting audio samples to use later.

Another interesting challenge that would keep me in the realm of mono performance: I could try to compose a piece using just my prepared violin. In this assignment, I looked for new sounds and textures without constraining myself around fitting these sounds into a composition. Getting these sounds to work in their raw form without digital manipulation would be a fun, albeit difficult, next experiment!

The A-Minor Music Machine

So that this doesn’t get buried under the reflections below, here’s the sketch. I will update here when the version that works with Arduino is functional.

——

Within 10 minutes of posting this, I’ll probably be editing code again. This project, more than any up to this point, drove me insane and made me so happy.

At various points in the week, I grunted out loud to myself (and anyone in my general vicinity) as I failed to notice a missing “this.”; I enlisted a former software developer colleague to hop on the phone and talk me through questions late at night when office hours were no longer an option; I thought I’d reached a nice stopping place, then thought of one more feature, then looked up and another hour went by.

And in the process, my empathy for all the devs I know just increased exponentially from an already-empathetic place.

With the preamble aside— what was the assignment? What did I make?

The assignment this week was to use objects, classes, arrays of objects, and interactions between objects. I hadn’t yet used sound in a p5 sketch, so I made that a constraint for myself— do all of the above while using sound.

I got inspiration for this sketch from this example; after looking at it I wanted to see if I could replicate the effect on 4 different drum sounds on one canvas. I started by trying to get it working without any objects/classes, which you can see here. It worked! From there, I started to see a “sound” object coming together— something that had a type for each type of drum sound (kick, snare, etc). It would also be able to display itself and maybe analyze its amplitude. Simple, right?
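
To make that concrete, here is an illustrative skeleton of that kind of “sound” object (not the exact code from my sketch, just the shape of it):

```javascript
// An illustrative skeleton of the "sound" object described above, not the
// exact code from my sketch. Each instance wraps one drum sample, analyzes
// only its own level, and knows how to draw itself.
class DrumSound {
  constructor(type, soundFile, x) {
    this.type = type;        // 'kick', 'snare', etc.
    this.sound = soundFile;  // a p5.SoundFile loaded in preload()
    this.x = x;              // where this drum's rectangle sits on the canvas
    this.amplitude = new p5.Amplitude();
    this.amplitude.setInput(this.sound); // analyze this sound, not the whole mix
  }

  play() {
    this.sound.play();
  }

  display() {
    const level = this.amplitude.getLevel(); // roughly 0 to 1
    const h = map(level, 0, 1, 0, height);   // louder sample, taller rectangle
    rect(this.x, height - h, 80, h);
  }
}
```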

The process of migrating this code to an object-oriented structure took a lot longer than I’d expected, and was the source of a lot of the grunting and requests for help. Ultimately, there were a couple of important lessons:

  • Understand variable scoping:

    • Global variables can be used.. um… globally.

      • When using something like p5.Amplitude, this was a really important concept to internalize. p5.Amplitude measures the entire audio output unless otherwise specified. If a single p5.Amplitude is set up globally, it’s not possible to call setInput() on it later for each sound and have each reading used to, say, draw rectangle heights the way I do.

    • variables attached with this. are available throughout an object instance, across all of the class’s functions

    • declaring a variable inside a function within a class scopes that variable to just that function

      • know when to use a plain local variable vs. a this. property

  • While debugging, isolate each action you are trying to perform independent of other actions.

    • Do this by finding or creating variables to console log in order to see whether the program is functioning as expected.

  • Keep simplifying

    • I found myself, and still find myself, repeating some code with only small variations for particular cases. Instead of having a bunch of if statements to describe when to play a sound and the same set of if statements to describe when to analyze a sound, maybe there could be a common variable (in this case, this.sound) that gets passed to both .play() and .analyze().

Pieces

I recorded the drum sounds, chords and bass notes in Logic Pro X and exported each sound as an mp3. I loaded those into p5 and used preload() to make them available to the rest of the sketch.

I used p5.Amplitude with its setInput() and getLevel() methods to get the amplitude of any given sound file. I could then pass that amplitude reading (taken in the draw function so it updates continuously as the file plays) to the rectangle height parameter to create the drum visualizations. Those are triggered by keystrokes.
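
A hypothetical sketch of that wiring, reusing the DrumSound skeleton from earlier (file names and key bindings are placeholders):

```javascript
// Hypothetical wiring for the drum visualizations, reusing the DrumSound
// skeleton sketched earlier. File names and key bindings are placeholders.
let kickFile;
let snareFile;
let drums = [];

function preload() {
  // Each drum sample was exported from Logic as an mp3.
  kickFile = loadSound('kick.mp3');
  snareFile = loadSound('snare.mp3');
}

function setup() {
  createCanvas(400, 400);
  noStroke();
  drums.push(new DrumSound('kick', kickFile, 60));
  drums.push(new DrumSound('snare', snareFile, 200));
}

function draw() {
  background(0);
  fill(255);
  // getLevel() is read every frame, so the rectangles follow each sample's envelope.
  for (const d of drums) {
    d.display();
  }
}

function keyPressed() {
  // Keystrokes trigger the drums (later to be replaced by physical buttons).
  if (key === 'a') drums[0].play();
  if (key === 's') drums[1].play();
}
```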

The chords are stored in an array. The right and left arrows cycle up or down the array, wrapping back around when they reach the end. When a chord object is created, it gets passed a random number between 1 and 6, and that number is used to change the background. A few of the numbers change the background color; others don’t. Because the number assigned to an instance is random, there’s no rhyme or reason to when the background changes, other than that it only happens when chords are played.
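
A simplified sketch of that cycling logic (illustrative file names, roll values, and colors, not my exact code):

```javascript
// A simplified sketch of the chord-cycling logic; file names, the roll values
// that change the background, and the colors are illustrative stand-ins.
let chordFiles = [];
let chordIndex = 0;
let currentRoll = 0; // random 1-6, re-rolled each time a chord plays

function preload() {
  for (const name of ['chord1.mp3', 'chord2.mp3', 'chord3.mp3']) {
    chordFiles.push(loadSound(name));
  }
}

function setup() {
  createCanvas(400, 400);
}

function draw() {
  // A few of the rolls change the background color; the rest leave it dark.
  if (currentRoll === 2 || currentRoll === 5) {
    background(80, 30, 120);
  } else {
    background(0);
  }
}

function keyPressed() {
  if (keyCode === RIGHT_ARROW) {
    chordIndex = (chordIndex + 1) % chordFiles.length; // wrap at the end
    playChord();
  } else if (keyCode === LEFT_ARROW) {
    chordIndex = (chordIndex - 1 + chordFiles.length) % chordFiles.length;
    playChord();
  }
}

function playChord() {
  chordFiles[chordIndex].play();
  currentRoll = floor(random(1, 7)); // random integer from 1 to 6
}
```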

There are some white dots that appear occasionally. These are an array of “Item” objects that get triggered from within the display function. They appear when the chord object’s random number happens to be a particular value and a kick drum has just been played. To make this happen, I created a state variable that tracks whether the kick drum has just been played (and resets after it does). When the kick (or clap, same logic) has played and the random number coincides, the ellipses appear until the condition is no longer true.
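
Condensed down, the state-flag pattern looks roughly like this (again, illustrative rather than my exact code):

```javascript
// A condensed, illustrative version of the state-flag idea: remember that the
// kick just played, and only draw the dots when that flag and the chord's
// current random number line up. The roll value 3 here is arbitrary.
let kickPlayed = false;
let currentRoll = 3; // in the real sketch this comes from the chord object

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(0);
  if (kickPlayed && currentRoll === 3) {
    // In the real sketch, this is where the array of white "Item" dots draws.
    fill(255);
    ellipse(random(width), random(height), 5, 5);
  }
  kickPlayed = false; // reset so the dots only respond to a fresh kick
}

function keyPressed() {
  if (key === 'a') {
    // the kick sample would also play here
    kickPlayed = true;
  }
}
```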

The bass notes trigger the triangular visualization in a similar way.

All in all, this has been a really rewarding project for ramping up my understanding of a number of things. As an added bonus, it is a great foundation for working with serial communication in PComp: the variables controlled by keystrokes will be controlled by buttons and other things out in the real world!

PComp experiment to-be

I want to make a MIDI controller with my Yeezy shoebox. I built the enclosure last week and just acquired the parts I need to try to get this to work. In conjunction with this week’s material covering serial communication, this seems like a good way to test my understanding.

The logic seems fairly straightforward:

I will connect the potentiometers to analog input pins and the buttons to digital input pins, then send MIDI out to a receiving device through the MIDI socket I’ll wire up on the breadboard and connect to the Arduino. I’ll write a program that brings the MIDI library into the Arduino IDE and sends MIDI out. I’ll need to convert the analog readings (0-1023) down to MIDI’s 0-127 range.

In the MIDI code, I think I’ll need to map the potentiometers and buttons to MIDI ports that my digital audio workstation (Logic) will understand. I’m not sure this is necessary — it might be possible to do this mapping in Logic, as long as Logic sees that MIDI is being sent.

I’ll be interested to see where there is more complexity than I anticipated. I’m sure there will be. I’ve found some tutorials online that should help, although I don’t know that any are exactly what I need!

EDIT: I just found this lab on the PComp website! I will follow along with it to see if it gets the job done with my Nano, even though I got an Uno and MIDI socket based on the other tutorials I had seen.

Yeezy Boost Midi Controller Enclosure

This past weekend the internet was abuzz as Kanye West held listening parties in New York, Detroit and Chicago for an upcoming album, meant to be released on Sunday but yet to be seen. For much of my life, Kanye’s music and approach to creative output has been inspiring — and as Yeezy Season approaches, I remembered that I had kept the box for a pair of shoes I purchased a year ago. Seems like an appropriate time to put it to use.

For this project, I decided to start building a MIDI Controller out of the shoebox. I’m going to be working on the wiring and Arduino programming this week to ensure the piece is functional. For now, I prototyped the buttons and knobs.


When I work on the next set of steps, I know I’ll need to do a couple of things to ensure the knobs and buttons work properly, both electrically and vis-a-vis user experience: 1) place a firm platform underneath the buttons so that they click when pressed, and 2) screw the potentiometers through a second, thick piece of cardboard placed inside the box to ensure the knobs turn properly.

I’ll likely create the platform mentioned above with acrylic on standoffs, creating housing for the circuit board and wires that will all live in the inner shoebox. I may need to further alter the inner shoebox to ensure that it doesn’t bump into mounted items when pulled out of the larger box.

10 Years of Waves

I recently found some music that I made in college, including the very first tracks I ever made. This academic year will mark 10 years since I graduated — it’s a good time to look back at who I was and who I’ve become. For this project, we were asked to use a color palette of 6 colors to represent ourselves and create compositions with just those colors. For the work below, I took the first 10 seconds of each of the 6 early songs that I found and visualized them as waveforms. I applied color to those waveforms and then placed them on a black canvas with the number 10 in white text. I sized the waveforms differently and zoomed in close so as to obscure the number 10, creating compositions that sometimes skew any of the components of the image.

Below the gallery, I included the image of the full canvas. Note: when actually creating the compositions, I sometimes moved/deleted waveforms and the numbers, so what’s on this canvas now is not how it was set up for the compositions.

If you want to hear any of the tracks used for this piece (and maybe try to match the track to the composition), here is a private Soundcloud link.

10 master.png