Listening to Bio-Signal (Or: JAWZZZ)

Assignment: Expose a signal from some under-represented part of your body.

Idea

Our bodies produce signals that we can't see but can often feel in one way or another. Whether pain or restlessness, euphoria or hunger, our bodies have mechanisms for expressing the invisible.

Some of its signals, however, take time to manifest. Small amounts of muscle tension only convert into pain after crossing some threshold, which in some cases can take years to reach. I clench my teeth at night, a habit that only became visible a few years ago when the enamel on my teeth showed significant wear and tear. At various times I had unexplained headaches or jaw lock, but for the most part, my overnight habits were invisible and impossible to sense.

With technological prostheses, however, we can try to shift the speed at which we receive these signals. This week, I built a muscle tension sensor to wear on my jaw while sleeping, hoping I could sense whether I still clench it. Long story short: I most likely do, but without spending more time on statistical analysis, it's not wise to read too deeply into the results.

I'll go over the process and results, but perhaps the most important reflection from this whole exercise is that even a 3-day experiment revealed the pitfalls that accompany trying to quantify, and infer meaning from, data in situations with even minimal complexity.

Process

This experiment required the following pieces:

  • Wire together a muscle tension sensor and microcontroller

  • Send data from the sensor to a computer

    • I used the MQTT protocol to wirelessly send data from my Arduino to a Mosquitto server (a rough sketch of this publish step follows this list)

  • Write the data from the server to a database

    • I used a node.js script to listen to the MQTT data and write it to a local SQLite database on my computer

  • Analyze data from the database
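
To give a flavor of the first two steps, here is a stripped-down sketch of the Arduino side of the pipeline. It assumes a WiFi-capable board with the WiFiNINA and ArduinoMqttClient libraries, the muscle tension sensor on pin A0, and placeholder network, broker, and topic names; my actual code differed in the details, and the node.js/SQLite half lived on my computer as described above.

```
// Sketch of the Arduino side: read the muscle tension sensor twice per
// second and publish each reading over MQTT. Board, libraries, pin, and
// all names/credentials below are placeholders, not my exact setup.
#include <WiFiNINA.h>
#include <ArduinoMqttClient.h>

const char ssid[]   = "my-network";    // placeholder WiFi credentials
const char pass[]   = "my-password";
const char broker[] = "192.168.1.10";  // placeholder Mosquitto server address
const int  port     = 1883;
const char topic[]  = "jaw/tension";   // placeholder topic name

WiFiClient wifiClient;
MqttClient mqttClient(wifiClient);

const unsigned long intervalMs = 500;  // 2 readings per second
unsigned long lastReading = 0;

void setup() {
  Serial.begin(9600);

  // Connect to WiFi, then to the MQTT broker.
  while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
    delay(5000);
  }
  while (!mqttClient.connect(broker, port)) {
    delay(5000);
  }
}

void loop() {
  mqttClient.poll();  // keep the MQTT connection alive between readings

  if (millis() - lastReading >= intervalMs) {
    lastReading = millis();

    int tension = analogRead(A0);  // raw muscle tension reading

    // Publish the raw value as the message payload.
    mqttClient.beginMessage(topic);
    mqttClient.print(tension);
    mqttClient.endMessage();

    Serial.println(tension);  // also echo to the serial monitor
  }
}
```

Using a millis() timer instead of delay() keeps the MQTT connection polled between samples.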

[As a side note: prior to this assignment, I had not used several of these technologies, and certainly not in such an interconnected way. The technical challenge, and the opportunity to pick up a number of useful skills while tackling it, was a highlight of the week!]

I started by assembling the hardware and testing on my forearm to make sure it worked properly:

IMG_4615.jpeg

I then moved to testing that it could sense jaw clenching (it did):

IMG_4632.jpeg

Ultimately, I put it to the test at night. The first night I tried to use the sensor, my beard seemed to interfere with the electrodes too much. In true dedication to science, I shaved off my beard for the first time in years :P It seemed to do the trick:

Adjustments.jpeg

Results

OK, so— what happened?

First, the basics: this data was collected on Saturday night into Sunday morning over ~8 hours. I wore the sensor on my right jaw muscle and took 2 readings per second the entire time (roughly 57,600 readings).

And a few caveats: this is only one night's worth of data, so it is not conclusive whatsoever. It's just a first set of thoughts, which can hopefully be refined with more data and Python know-how. I also did not film myself sleeping, so I can't cross-check what seems to be happening in the data against what actually happened in real life.

With that said, here’s one explanation of what happened.

Throughout the night, it's likely that I shifted positions 5-10 times in a way that affected the sensor. In the graph below, there are clusters of data points that appear as blue blocks. Those clusters are periods where the readings were fairly consistent, suggesting that I may have been sleeping in one position. The clusters are usually followed by a surge in reading values, which happens when the sensor detects muscle tension but also happened whenever I touched the sensors with my hand to test calibration. While sleeping, it's possible that I rolled over onto the sensor, triggering periods where the readings were consistently high.

annotated jaw analysis.png

During those fairly stable periods, there are still a lot of outlying points. By zooming in on one "stable" area, we can look at what's happening with a bit more resolution:

Screen Shot 2020-02-23 at 6.36.50 PM.png

This is a snapshot of 1 minute. At the beginning of the snapshot, the sensor values are clustered right around a reading of 100. Then there is a gap in readings (they went above 400, and I didn't adjust the y-axis scale for this screenshot), then they return to ~100 before spiking to 400. They finally begin returning to equilibrium towards the end of the minute.

jaw analysis 2.png

This could be evidence of the jaw clenching I was looking for in the first place. It would be reasonable to expect clenching to last only a few seconds at a time but to happen many times in a row. Perhaps that is what this data shows in action: I am sleeping normally, clench my jaw for a few seconds, relax for 5 seconds, and then clench again for another 5 seconds before letting up.

Ultimately, it looks like this sensor data may unveil 2 behaviors for the price of 1: shifts in sleeping position + jaw clenching!

Reflections

In order to make these insights somewhat reliable, I need to do a few things:

  • Collect more data

    • This is only one night's worth of data. It's possible that this is all noise, that the sensor didn't actually work at all, and that I'm just projecting meaning onto meaningless data. A bigger sample size would help show which patterns persist night after night.

  • Collect data from different people

    • In order to validate the hypothesis that high-level clusters reflect shifts in position and that more granular clusters/outliers show jaw clenching, I'd need to try this with other people. I know that I clench my jaw, but if someone who doesn't clench still shows similar patterns in their data, I'd need to revisit these hypotheses.

  • Validate insights against reality

    • If I had video of my night, or if some house elf took notes while I slept, we could tag actual behaviors with timestamps. Capturing shifts in position should be relatively easy, as long as I get the lighting figured out. Clenching might be harder to capture on video.

  • Statistical analysis

    • I used the scatterplot to spot obvious visual patterns. Using some clustering analysis, I could understand the relationships between clusters and outliers at a more detailed level (a rough sketch of this idea follows this list).
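
To make that last point concrete, here is one simple first pass, written in C++ purely for illustration (with real data I would more likely do this in Python): segment the readings into "stable" runs and "spike" runs relative to a running baseline, then look at how long each run lasts. Long stable runs would line up with sleeping positions, while short spike runs are clench candidates. The window size, threshold, and stand-in data below are invented for the example.

```
// Toy segmentation of a series of sensor readings into "stable" vs. "spike"
// runs, using a running baseline. Thresholds and data are made up.
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

struct Segment {
  std::size_t start;   // index of the first reading in the run
  std::size_t length;  // number of readings in the run
  bool spike;          // true if readings sat well above the local baseline
};

std::vector<Segment> segment(const std::vector<int>& readings,
                             std::size_t window = 120,  // ~1 minute at 2 Hz
                             int threshold = 150) {     // counts above baseline
  std::vector<Segment> runs;
  for (std::size_t i = 0; i < readings.size(); ++i) {
    // Baseline = mean of the last `window` readings (or whatever is available).
    std::size_t lo = i > window ? i - window : 0;
    double baseline =
        std::accumulate(readings.begin() + lo, readings.begin() + i + 1, 0.0) /
        (i - lo + 1);
    bool spike = readings[i] - baseline > threshold;

    if (runs.empty() || runs.back().spike != spike) {
      runs.push_back({i, 1, spike});  // start a new run
    } else {
      runs.back().length++;           // extend the current run
    }
  }
  return runs;
}

int main() {
  // Stand-in data: quiet sleep, a brief clench-like spike, more quiet sleep.
  std::vector<int> readings(240, 100);
  for (std::size_t i = 120; i < 130; ++i) readings[i] = 420;

  for (const Segment& s : segment(readings)) {
    std::cout << (s.spike ? "spike" : "stable") << " run of " << s.length
              << " readings (" << s.length / 2.0 << " s)\n";
  }
}
```

Even something this crude would let me ask how run lengths are distributed: lots of 2-10 second spikes would look like clenching, while hour-long stable blocks would look like sleeping positions.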

Beyond what I could do to improve this analysis, I think there's a bigger point to make: we should be skeptical of the quantified data we are presented with and ask hard questions about how its presenters arrived at their conclusions. In my experiment above, I could have made some bold claim about my sensor being able to detect sleep positions and TMJ-inducing behavior, but the reality is that the data needs a lot of validation before any insights can be drawn confidently. While academia has checks and balances (which themselves have a lot of issues), the rise of popular data science and statistics has not been coupled with robust fact-checking. So, before going along with quantified-self data, make sure to ask a lot of questions about what might be causing the results!

Thanks to Don Coleman and his course Device to Database, both of which were extremely helpful for the technical implementation of this project.

Plant Monitoring Sensors

Assignment: Use temperature, humidity, soil moisture, and UV sensors to monitor a plant’s environment + send that information to an MQTT server.

Process and Results:

Setting up the sensors was fairly straightforward; I followed these templates for soil moisture and temperature/humidity, then added readings for UV, illuminance, and pressure following the same steps we took to get the other readings into the right format. To make these changes to the code, I needed to 1) create a new MQTT topic for each reading, 2) define the sensor reading variable, 3) actually send the MQTT message, and 4) print the new readings to the serial monitor.

Initially, I took one reading every 10 seconds, but today I switched to one every 2 minutes. Even that interval may be shorter than necessary (the readings are unlikely to change a whole lot inside the building), but I over-collected in case we want more data to analyze in class. I've been getting consistent readings so far and am looking forward to learning how to get them into a database during class.
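
For a rough sense of what those four changes look like in code (my full source is linked below), here is a sketch assuming something like the MKR ENV Shield with the Arduino_MKRENV library, the ArduinoMqttClient library, a soil moisture sensor on A0, and placeholder broker and topic names:

```
// Sketch of the additions: one MQTT topic per reading, read each sensor,
// publish the value, and echo it to the serial monitor every 2 minutes.
// Shield, library choices, pin, and all names below are placeholders.
#include <Arduino_MKRENV.h>
#include <ArduinoMqttClient.h>
#include <WiFiNINA.h>

WiFiClient wifiClient;
MqttClient mqttClient(wifiClient);

// 1) A new MQTT topic for each reading.
const char tempTopic[]     = "plant/temperature";
const char humidityTopic[] = "plant/humidity";
const char moistureTopic[] = "plant/moisture";
const char uvTopic[]       = "plant/uv";
const char illumTopic[]    = "plant/illuminance";
const char pressureTopic[] = "plant/pressure";

const char broker[] = "192.168.1.10";     // placeholder Mosquitto server
const unsigned long intervalMs = 120000;  // one set of readings every 2 minutes
unsigned long lastReading = 0;

void publishReading(const char* topic, float value) {
  // 3) Actually send the MQTT message for one reading.
  mqttClient.beginMessage(topic);
  mqttClient.print(value);
  mqttClient.endMessage();

  // 4) Print the new reading to the serial monitor.
  Serial.print(topic);
  Serial.print(": ");
  Serial.println(value);
}

void setup() {
  Serial.begin(9600);
  ENV.begin();

  // Connect to WiFi and the broker (credentials and address are placeholders).
  while (WiFi.begin("my-network", "my-password") != WL_CONNECTED) delay(5000);
  while (!mqttClient.connect(broker, 1883)) delay(5000);
}

void loop() {
  mqttClient.poll();

  if (millis() - lastReading >= intervalMs) {
    lastReading = millis();

    // 2) Define the sensor reading variables.
    float temperature = ENV.readTemperature();
    float humidity    = ENV.readHumidity();
    float pressure    = ENV.readPressure();
    float illuminance = ENV.readIlluminance();
    float uv          = ENV.readUVIndex();
    float moisture    = analogRead(A0);

    publishReading(tempTopic, temperature);
    publishReading(humidityTopic, humidity);
    publishReading(moistureTopic, moisture);
    publishReading(uvTopic, uv);
    publishReading(illumTopic, illuminance);
    publishReading(pressureTopic, pressure);
  }
}
```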

My plant is located along the hallway outside of the ITP floor, in front of the second bathroom stall, with a blue label attached to it.

You can find the source code after the pictures below.

IMG_4624.jpeg
IMG_4618.jpeg
IMG_4621.jpeg

Source Code

Find the source code here.

Yeezy Boost Midi Controller Enclosure

This past weekend the internet was abuzz as Kanye West held listening parties in New York, Detroit, and Chicago for an upcoming album, meant to be released on Sunday but yet to be seen. For much of my life, Kanye's music and approach to creative output have been inspiring, and as Yeezy Season approaches, I remembered that I had kept the box from a pair of shoes I purchased a year ago. It seems like an appropriate time to put it to use.

For this project, I decided to start building a MIDI Controller out of the shoebox. I’m going to be working on the wiring and Arduino programming this week to ensure the piece is functional. For now, I prototyped the buttons and knobs.

public.jpeg

When I work on the next set of steps, I know I'll need to do a couple of things to ensure the knobs and buttons work properly, both electrically and in terms of user experience: 1) place a firm platform underneath the buttons so that they click when pressed, and 2) screw the potentiometers through a second, thicker piece of cardboard inside the box so that the knobs turn properly.

I'll likely create the platform mentioned above with acrylic on standoffs, creating a housing for the circuit board and wires that will all live in the inner shoebox. I may need to further alter the inner shoebox to ensure that it doesn't bump into mounted components when pulled out of the larger box.

PComp Interaction Fail IRL

If you live in a city with crosswalks, you’ve probably seen something like this.

public.jpeg

And if you’ve seen this button, you’ve probably wondered whether it actually does anything. The signage suggests that if a pedestrian presses the button, the walk sign will appear and traffic will stop. Of course, traffic doesn’t (and probably shouldn’t) revolve solely around the pedestrian, so the traffic cycle will play out before changing to walk. Often, even if the button is never pressed, the cycle will rotate through walk and don’t-walk signs with no human input. In most of these cases, it’s really difficult to know whether pushing the button has any effect on shortening the cycle. As a result, if you wait on the corner and watch people interact with this button, they will often either not push it at all despite needing to walk, or they’ll repeatedly push it, getting more and more annoyed that nothing is happening.

It's possible that some of these systems are just placebos that make people feel in control of the traffic cycle. It's more likely that the system actually does work in some capacity but doesn't communicate what is happening in a satisfying manner. In its current state, this feels like a failed interaction, even if the system works as intended.

What could be improved? I’ll offer one suggestion for now. Let’s start by thinking about the different parts of this system:

1) Assume this is a simple intersection with 2 directions of car traffic, left turn lane protections for cars, and 2 directions of pedestrian traffic. There will be walk signs in both directions of traffic, along with stoplights for cars.

2) There is some traffic cadence, e.g. 45 seconds for one direction of traffic, then 45 seconds for the other, with a 5-second pause in between.

3) This cadence could be bounded by a range: the minimum amount of time traffic flows before changing could be 30 seconds, and the maximum could be 1 minute.

In this setup, the default could be set to the maximum cadence: traffic flows for a minute before switching directions. Pushing the button would set the cadence to its 30-second minimum.

Right now, the frustration is largely due to the lack of feedback for the button-pusher. What if there were a simple countdown clock under the button that turns on once the button is pushed, displaying however much time is left in the current cadence cycle?
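
Here's a toy sketch of that logic in Arduino-style code, with a made-up button pin and with serial output standing in for the countdown display:

```
// Toy version of the proposed crosswalk logic: traffic runs on the maximum
// cadence by default, a button press caps the current phase at the minimum
// cadence, and the remaining time is shown to the waiting pedestrian.
const int buttonPin = 2;  // made-up pin

const unsigned long maxCadenceMs = 60000;  // default: 60 s per direction
const unsigned long minCadenceMs = 30000;  // after a button press: 30 s

unsigned long phaseStart = 0;
unsigned long phaseLength = maxCadenceMs;
bool buttonPressed = false;

void setup() {
  Serial.begin(9600);
  pinMode(buttonPin, INPUT_PULLUP);
  phaseStart = millis();
}

void loop() {
  unsigned long elapsed = millis() - phaseStart;

  // A button press shortens the current phase to the minimum cadence
  // (but never below the time that has already elapsed).
  if (digitalRead(buttonPin) == LOW && !buttonPressed) {
    buttonPressed = true;
    if (phaseLength > minCadenceMs) {
      phaseLength = max(elapsed, minCadenceMs);
    }
  }

  // Once pressed, show the remaining time: this is the missing feedback.
  if (buttonPressed) {
    unsigned long remaining = elapsed < phaseLength ? phaseLength - elapsed : 0;
    Serial.print("Walk sign in: ");
    Serial.print(remaining / 1000);
    Serial.println(" s");
  }

  // Phase over: switch directions and reset to the default cadence.
  if (elapsed >= phaseLength) {
    // toggle the stoplights and walk signs here (omitted)
    phaseStart = millis();
    phaseLength = maxCadenceMs;
    buttonPressed = false;
  }

  delay(1000);  // update roughly once per second
}
```

The same structure would work with whatever timing scheme a real controller uses; the point is simply that the remaining time is already known and could be surfaced to the person waiting.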

Perhaps this exists already (I have hazy memories of seeing better versions of these walk buttons outside of the US), but if not, I hope to see something like it soon! Even as cities switch to using motion detection for stoplights, having some simple feedback that lets pedestrians (and perhaps drivers) understand what is happening under the hood will make for a more empathetic, less frustrating experience.