State of my live music system & other musings
After some site upgrades it's become much easier for me to write posts. The site has better post management (for example, I can now store drafts on-site) and a better editor that uses Markdown, rather than what I had before (a WYSIWYG editor embedded in Flask-Admin). I've learned a lot building this site and it continues to be surprisingly adaptable to my needs.
Anyway, I recently did two shows. The first was a new spin on material from the March show, and the second was all new. But both debuted an improved live setup that I have dedicated (and hope to continue to dedicate) a lot of time to developing. I want to once again share details about how that is all going and what's new. I've called the system "emsys", but I don't know if that will stick long term!
While I consider "emsys" more of a culmination of all the equipment I'm using put together (it is, after all, em's—that is to say, my—system, in its totality; and all the pieces matter), the most "custom" part of it, and the part I'm going to talk about most here, is the software I have developed for my Raspberry Pi 4 using mostly Pure Data and Python. So how did I end up here after my gig in March?
Desync: The heir to the MIDI throne
I started by reviewing two issues. The first was MIDI clock desync, which occurs for all manner of reasons, including but not limited to MIDI bandwidth, tempo fluctuations, and the simple fact that time passes (yeah, it's that bad). MIDI clock desync had occurred at the end of the March set, so something needed to be done.
The second issue was more of a limitation: "pattern" desync between the Monomachine & Machinedrum (+MCL). To clarify, this is when the two synths have patterns of different loop lengths and cannot come back into sync unless the transport is restarted (pressing stop and then play). This is because the Monomachine has no way of interrupting a currently playing pattern; it can only queue. Naturally you could just avoid ever using a Monomachine loop length that isn't divisible by MCL's, and you would never run into this - but that is a significant limitation for me as someone who loves polyrhythm.
Fortunately, the solution I eventually came up with resolved both of these issues at once, but I'll take you through my journey getting there.
So, let's talk about MIDI clocks... Originally I had planned to continue using the Machinedrum's internal clock to run everything as it had done in March, only with my growing Pure Data patch in the mix. I implemented a system in the Pd patch that would keep track of MIDI latency from the clock source, and the time between beats, so that it could predictively stop and start transport - essentially resyncing the synths on the fly in a way that sounds fluid. (This is what I've previously called "forced interrupts" or "FIs" in other posts, but I now call it "transport interrupts" / "tins").
I made it work, but it was a total hacky mess and not easy to expand on. I got that feeling; the one where you're blindly developing a system, you look back at it, and it starts to feel a bit ridiculous. That's when I knew I needed to switch course and reorient myself for the sake of establishing good foundations for long-term expansion.
So, I started thinking of ways I could use an internal clock running from my Pi instead. That way, Pd would know when to restart transport, since there would be no latency concerns (both Pd and the clock would be running on the same system). I had actually tried this before, but I couldn't work out how to make it stable and easy to manipulate externally. This was mainly due to two things: (1) the USB protocol introducing jitter to the timing of MIDI signals, and (2) being unable to find a piece of software that could run a clock and let you adjust its tempo externally (e.g. from a Pd patch).
For the first issue, I did some research and ended up acquiring the Pisound HAT, which gives the Pi native DIN MIDI ports through which I could send clock data without USB jitter. (After testing, it turned out to be not quite as precise as the Machinedrum's internal clock, but good enough for 99% of purposes.)
Since I was now using Pisound, I changed the Pi's OS to Patchbox, the OS created for Pisound, which is based on Debian 12. It offers an easy way to install a realtime kernel for further minimising latency, as well as support for Pisound's signature button, which I later configured to perform basic actions like restarting the Pd patch, rebooting, and shutting down the system.
For the second issue (finding MIDI clock software with external controls), I tried lots of different solutions. I first tried wielding JACK's transport with jack_midi_clock, then seq24 and seq64 / seq66 - but all were more complicated than what I needed, and none seemed to allow externally controlled BPM. In the end, I found a project on GitHub which very simply generated a MIDI clock from Python with mido & rtmidi. I knew Python! It's what I made this site in (mostly). So I examined how it worked, and then I frankensteined it for my purposes: stripped away the UI, gave it virtual ALSA MIDI I/O connections, and added a way to switch it on/off & adjust its tempo through those virtual connections (which another Python script maintains). I also adjusted the sleep interval between ticks to bring CPU load down to around 18-20%, which ought to leave enough room for whatever I have Pd get up to in the future (I hope).
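For the curious, here's a minimal sketch of the kind of clock loop this boils down to, assuming mido with the rtmidi backend; the port names and the CC used for tempo are illustrative, not what my actual script uses:

```python
# Minimal MIDI clock sketch with mido (rtmidi backend) and virtual ALSA ports.
# Port names and the CC used for tempo are illustrative, not my actual script.
import time
import mido

PPQN = 24  # MIDI clock runs at 24 ticks per quarter note

out = mido.open_output('emsys-clock', virtual=True)      # clock goes out here
ctl = mido.open_input('emsys-clock-ctl', virtual=True)   # start/stop/tempo come in here

bpm = 120.0
running = False
next_tick = time.monotonic()

while True:
    # Poll the control port without blocking: transport and tempo changes
    for msg in ctl.iter_pending():
        if msg.type == 'start':
            running = True
            next_tick = time.monotonic()
            out.send(mido.Message('start'))
        elif msg.type == 'stop':
            running = False
            out.send(mido.Message('stop'))
        elif msg.type == 'control_change' and msg.control == 20:
            bpm = 60.0 + msg.value * 2   # map CC 20 (0-127) onto 60-314 BPM

    if running and time.monotonic() >= next_tick:
        out.send(mido.Message('clock'))
        next_tick += 60.0 / (bpm * PPQN)

    time.sleep(0.0005)   # the sleep interval is what trades timing precision for CPU
```

The idea is that Pd (or anything else) can drive the clock by sending start/stop and that tempo CC into the control port - one of the connections the script below keeps alive.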
The Python script that maintains virtual MIDI connections, as I mentioned, is another aspect of this system. It continuously matches ports from `aconnect -l` and establishes the needed connections when they appear. This means there is no problem if any particular device fails or is disconnected, as the script will automatically hook it back in when available. These needed connections would change over time to meet the demands of my growing Pure Data patch, and all of these scripts were given systemd services so that they run on boot too.
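If you're wondering what that looks like, here's a rough sketch of the idea using `aconnect` from alsa-utils; the WANTED pairs and the parsing are simplified placeholders rather than my actual routing table:

```python
# Sketch of the connection-maintainer idea: poll ALSA MIDI ports and
# (re)connect the pairs we care about whenever both ends are present.
# The WANTED pairs are placeholders, not my actual routing.
import re
import subprocess
import time

WANTED = [
    ('emsys-clock', 'pisound'),    # clock -> DIN out (placeholder client names)
    ('pisound', 'Pure Data'),      # DIN in -> Pd
]

def list_ports():
    """Map client names to 'client:port' addresses parsed from `aconnect -l`."""
    ports = {}
    out = subprocess.run(['aconnect', '-l'], capture_output=True, text=True).stdout
    client = None
    for line in out.splitlines():
        m = re.match(r"client (\d+): '([^']+)'", line)
        if m:
            client = (m.group(1), m.group(2))
            continue
        m = re.match(r"\s+(\d+) '", line)
        if m and client:
            ports.setdefault(client[1], f'{client[0]}:{m.group(1)}')  # first port per client
    return ports

while True:
    ports = list_ports()
    for src, dst in WANTED:
        if src in ports and dst in ports:
            # aconnect exits non-zero if the connection already exists; that's fine
            subprocess.run(['aconnect', ports[src], ports[dst]],
                           stderr=subprocess.DEVNULL)
    time.sleep(2)
```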
Segment data & msets: The nuts & bolts
The patch itself is currently designed to take care of the scheduling of material through what's essentially a more sophisticated version of the "song mode" (p59) present on the Elektrons. Individual "segments" group several pieces of information together about any one point in a set - most importantly, Elektron bank IDs (p45) (e.g. A09, C11, H02...) for both the MD (via MCL) & the MnM, tempo information (bpm, ramp time), loop length, etc.
All of these segments are stored in `.mset` files, which can be switched out. There is also a function to calculate the estimated duration of the mset you're working on, in order to help with pacing.
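To give a flavour of what a segment holds and how a duration estimate could fall out of it, here's an illustrative sketch; the actual `.mset` format and field names differ:

```python
# Sketch of a segment record and an mset duration estimate.
# Field names are illustrative; the real .mset format differs.
from dataclasses import dataclass

@dataclass
class Segment:
    md_bank: str        # e.g. 'A09' (MD pattern, via MCL)
    mnm_bank: str       # e.g. 'C11'
    bpm: float
    ramp_beats: int     # beats spent ramping to this segment's tempo
    loop_beats: int     # loop length in beats
    repeats: int        # how many times the loop plays before moving on

def estimate_minutes(segments):
    """Rough pacing estimate: total beats per segment divided by its tempo."""
    total = 0.0
    for s in segments:
        beats = s.ramp_beats + s.loop_beats * s.repeats
        total += beats / s.bpm   # beats / (beats per minute) = minutes
    return total

mset = [Segment('A09', 'A01', 140, 8, 16, 4), Segment('C11', 'B03', 150, 4, 24, 2)]
print(f'~{estimate_minutes(mset):.1f} minutes')
```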
When designing sets, the patch keeps track of which Elektron banks aren't in use by a segment. This coincides with an "insert segment" function, which creates a new segment that points to the first available unused banks. In reality the Elektrons may have content at those banks, but my system regards them as blank and free to overwrite. In other words, when we delete a segment, its banks are treated as blank even if the content hasn't been removed from the synths - sort of like how a hard drive deletes files by marking them as disposable rather than physically removing them.
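The allocation itself is simple; a sketch of the idea (the A01-H16 bank naming is an assumption for illustration):

```python
# Sketch of the "insert segment" idea: a bank counts as free if no segment in
# the current mset references it, regardless of what's actually stored on the synth.
# Bank naming (A01..H16) is an assumption for illustration.
BANKS = [f'{letter}{num:02d}' for letter in 'ABCDEFGH' for num in range(1, 17)]

def first_free_bank(used_banks):
    for bank in BANKS:
        if bank not in used_banks:
            return bank
    raise RuntimeError('no free banks left')

# Banks referenced by segments in the current mset (MD side, say):
used = {'A01', 'A02', 'A03', 'B01'}
print(first_free_bank(used))   # -> 'A04'; deleting a segment frees its banks again
```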
This way of doing things drastically simplifies the cumbersome process of copying and pasting around patterns to different banks on the Elektrons in order to consolidate material. Better yet, it abstracts away the idea of there being any sense to the order of banks altogether, and ensures full use of the MnM & MD's total available pattern & kit space.
The goal with all this is to have as little overhead as possible while designing and performing material. In the studio I can develop material without counting bank numbers, and while performing, I can just focus on manipulating the material currently playing and move on when I want to (with the hold function).
In future I may group segments into what I may call mtracks and mbridges, which would allow managing material in a more modular way - scheduling a series of segments of material, and bridging from one to any other.
As for how the system moves from one segment to another, there is a big cluster of checks (developed with 5% intuition and 95% trial and error) that decide when to do and when not to do things. The incoming MIDI clock is broken down into a series of [counter] objects that keep track of beat divisions, and messages are sent at different times according to those divisions to schedule or queue changes. For new segment data, MIDI program messages (which trigger bank changes), etc., this happens a few beats before the next phrase; this is the "load bang". For transport interrupts, the stop signal is sent a 1/32nd note before the start of the next phrase, and is disallowed when a program message is queued for MCL, as this causes MCL to load MD data incorrectly (one of many checks in place to keep this all working as best I can).
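The real implementation lives in those Pd [counter] objects, but translated to Python for readability the timing logic boils down to something like this (phrase length and offsets are illustrative, not my actual values):

```python
# Python sketch of the counting logic the Pd [counter] objects implement:
# 24 clock ticks per beat, a fixed number of beats per phrase, the "load bang"
# a couple of beats early, and the transport-interrupt stop 1/32nd note
# (3 ticks) before the phrase boundary. Numbers are illustrative.
PPQN = 24
BEATS_PER_PHRASE = 16
TICKS_PER_PHRASE = PPQN * BEATS_PER_PHRASE
LOAD_OFFSET = 2 * PPQN   # send segment data / program changes 2 beats early
STOP_OFFSET = 3          # 1/32nd note before the downbeat

def on_clock_tick(tick, tin_armed, mcl_pgm_queued):
    """Called once per incoming MIDI clock tick; returns the actions to fire."""
    pos = tick % TICKS_PER_PHRASE
    actions = []
    if pos == TICKS_PER_PHRASE - LOAD_OFFSET:
        actions.append('load_next_segment')    # bank changes, tempo, etc.
    if pos == TICKS_PER_PHRASE - STOP_OFFSET and tin_armed and not mcl_pgm_queued:
        actions.append('send_stop')            # transport interrupt
    if pos == 0 and tin_armed:
        actions.append('send_start')           # resync everything on the downbeat
    return actions
```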
All the more intricate, musical, sound design & performance information is managed by MCL and the Elektron synths directly. How they work as systems is well documented elsewhere, such as in the Machinedrum, Monomachine & MCL manuals, and you'll see there is plenty that can be done (and is done) to twist ideas upside-down, or add and remove things on the fly. I could go into the what and how, but that gets into my creative process, which isn't the focus for now.
I will say I have some ideas that build off of those systems that I plan to try implementing into emsys. For example, having certain pertinent parameters slowly drift around randomly, particularly MnM parameters, since it needs more love... MIDI bandwidth permitting.
The face of emsys: a reverse engineered MiniLab 3
To manage these features and parameters we need good UI. At first I tried having Pd UI elements on a touch screen connected to the Pi. I took this to its conclusion by designing an interface and testing it all out, but in the end, having to interact with UI elements on a touch screen was too fiddly, and the screen had exposed internals that would've needed housing.
I shifted gears. I thought back to university, when I was designing df-s2 - my first time trying to design a live music program, back then in Max/MSP. For that I used a Novation SL Zero to control the patch. The SL Zero has two LCD screens on it which can be written to with sysex messages, so I used those sysex messages in my patch to write UI feedback on the screens.
Fast forward to this year: I had recently bought an Arturia MiniLab 3 for my studio. I was already using this controller in the March set to control MCL perf macros (p77), so in trying to expand its usage beyond a glorified slab of knobs, I did some research into what else I could do with it.
What I found was that a musician, Janiczek, had very graciously posted a GitHub Gist on the MiniLab 3, which detailed reverse-engineered sysex for taking control of its small OLED screen. The sysex data was gleaned from Arturia's Bitwig controller script.
Compared to the SL Zero's LCD screens, one advantage right off the bat with an OLED screen is that it updates much faster and without leaving behind a trail of artifacts. This excited me, so I got to work and quickly wired up some UI feedback from my Pd patch via sysex messages.
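In spirit, the patch just formats a couple of lines of text into a sysex message. A rough Python equivalent might look like the following, where the command bytes are placeholders - the actual reverse-engineered bytes come from Janiczek's Gist, not from here:

```python
# Rough equivalent of writing two rows of text to the ML3 screen with mido.
# COMMAND is incomplete on purpose: the real reverse-engineered command bytes
# come from Janiczek's Gist, so treat this whole message layout as a placeholder.
import mido

HEADER = [0x00, 0x20, 0x6B]    # Arturia's sysex manufacturer ID
COMMAND = [0x7F, 0x42]         # placeholder command bytes, not the real ones

def screen_sysex(row1, row2):
    body = [ord(c) for c in row1[:16]] + [0x00] + [ord(c) for c in row2[:16]]
    return mido.Message('sysex', data=HEADER + COMMAND + body)

out = mido.open_output('Minilab3 MIDI')   # port name is illustrative
out.send(screen_sysex('SEG A04  140BPM', 'PHR 03   HOLD'))
```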
For emsys 0.1, which was used in both of the October sets, I used the ML3 to monitor the current segment ID, current phrase, beat, tempo, whether I had scheduled transport interrupts, whether I was holding on the current part of the set, among other things. I could also edit all this information with the ML3's knobs and sliders, and copy, paste, delete and insert segment data to structure my sets.
I have improved this dramatically since October with emsys 0.2. There is now a modular screen & line manager system. Now I am able to quickly put together menus, temporary alert messages, parameter information, etc. with a syntax I came up with for structuring information on the ML3's two available rows of text.
What I mean by "modular" in this case is that all screen information, for all screens, is stored and updated at once, but we simply select which to view at any given time, and we can add additional screens, or provide additional information, by simply adding another screen manager module.
I even made a startup animation: a blinking face. It feels like my own completely one-of-a-kind device, and that feels really special.
Managing the controls was fairly straightforward through CC messages. The Shift button, rightmost pad, and (funnily enough) the sustain pedal act as function keys that modify the behaviour of other knobs and buttons. The pads handle most of the performance functions, like triggering tins, holds, managing transport, etc., whereas the knobs (currently) are used for editing segments. The only exception is the leftmost knob, which controls the tempo of my clock, and queues / switches segments when pushed in & turned.
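In pseudo-Python, that dispatch looks roughly like this (the CC numbers are placeholders; only CC 64 for sustain is a standard assignment):

```python
# Sketch of branching on function keys (Shift, pad, sustain) when handling CCs.
# CC numbers are placeholders; 64 is the standard sustain pedal CC.
SHIFT_CC, SUSTAIN_CC = 27, 64
TEMPO_KNOB_CC, TEMPO_KNOB_PUSH_CC = 20, 21

held = {'shift': False, 'sustain': False, 'knob_push': False}

def on_cc(control, value):
    """Return an action tuple for a CC, or None if it only updates modifier state."""
    if control == SHIFT_CC:
        held['shift'] = value > 0
    elif control == SUSTAIN_CC:
        held['sustain'] = value >= 64
    elif control == TEMPO_KNOB_PUSH_CC:
        held['knob_push'] = value > 0
    elif control == TEMPO_KNOB_CC:
        # pushed in & turned queues/switches segments; otherwise set the clock tempo
        return ('queue_segment' if held['knob_push'] else 'set_tempo', value)
    elif held['shift']:
        return ('edit_segment_field', control, value)   # knobs edit the current segment
    return None
```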
There are undoubtedly things I have missed in this post... But that's okay. See you next time.
Brief unhinged speculation on SLMs
Been having some thoughts about alternate arrangements for how I currently perform my music to allow for more freedom. Henceforth I'll dub these speculative ideas SLMs (speculative live mechanisms) and treat them as an object to be linked to other ideas in a grander system of thought. The extra unhinged ones can be called USLMs!
I am toying with the idea of incorporating a Pure Data patch on a Raspberry Pi that would serve to replace the MIDI merge box, as well as provide potential SLMs via the USB MIDI IN port on MCL, and some kind of redirect to the MnM via MCL PORT 2. Control would come into the Pi via USB MIDI from a MIDI controller, similar to my current system with the MiniLab.
Here are some random unorganised SLM ideas:
- Pd song mode for MnM & MCL - PGM IN for both, handled separately so that pattern slots do not need to be aligned beforehand; which means no automatic PGM OUT from MCL to MnM, unlike how I currently do things, which is very laborious. Instead, PGM is read from Pd in sequence from a list (txt), with the performer given varying degrees of control over when to stay or move on. The syntax of song mode data (SMD) may look like (a parsing sketch follows after this list):
  a. MCL #1-128 / MnM #1-128 / Length: Steps (1-64) or Duration (00:00:00.000). e.g. 12,15,64 or 73,10,00:03:13.500 - perhaps with tempo markings.
- Forced interrupts (FI) of MCL/MnM/MD, by queuing a pattern change via a PGM message followed immediately by a Play message (still needs testing on MCL).
- "Bent out of shape" / BOOs - parameters gradually and/or randomly malformed to create continuous variation. Certain parameters and parameter macros will be relied on for the interest they create, and randomly selected or prescribed in a list (txt).
- Kaiser-inspired generative sequencing of MnM POLY mode, or of individual MnM tracks.
- Pd sampler - to expand the functionality of the MD's UW capability by creating my own sampler in Pd with (good) sound output from the Pi.
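To make the SMD idea from the first item concrete, here's a parsing sketch (field names are illustrative):

```python
# Sketch of parsing the song-mode data (SMD) lines described above:
# "MCL pgm, MnM pgm, length" where length is either a step count or a
# hh:mm:ss.mmm duration. Field names are illustrative.
def parse_smd(line):
    mcl, mnm, length = [part.strip() for part in line.split(',', 2)]
    entry = {'mcl_pgm': int(mcl), 'mnm_pgm': int(mnm)}
    if ':' in length:
        h, m, s = length.split(':')
        entry['duration_s'] = int(h) * 3600 + int(m) * 60 + float(s)
    else:
        entry['steps'] = int(length)
    return entry

print(parse_smd('12,15,64'))             # {'mcl_pgm': 12, 'mnm_pgm': 15, 'steps': 64}
print(parse_smd('73,10,00:03:13.500'))   # {'mcl_pgm': 73, 'mnm_pgm': 10, 'duration_s': 193.5}
```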
Currently these are speculative ideas as stated before. They will probably not see use in the next sets that I do, but I hope to explore their use in the future.
Gigging and my health
I wanted to shed some light on some behind-the-scenes stuff pertaining to the recent gig for Stimulacrum @ The Bee's Mouth, and also touch on some of the struggles I've been having with regard to gigging and my health.
The set was performed on a pair of Elektron synthesizers - a Monomachine and a Machinedrum UW, with custom firmware (X.01A and X.10), a MegaCMD with MCL 4.51, an Arturia Minilab 3, and a 3x3 MIDI merge device (U6 MIDI pro) to route everything. It was mixed with a Mackie Mix8 and recorded with a Zoom H4N Pro recorder from the tape output.
The routing was essentially:
- MD in -> MCL out 1
- MD out -> MCL in 1
- MnM in -> U6 out 1
- MnM out -> U6 in 1
- MCL in 2 -> U6 out 2
- MCL out 2 -> U6 in 2
- MiniLab out -> U6 in 3
The MIDI merge routing was something like:
- In 1 -> Out 2
- In 2 -> Out 1
- In 3 -> Out 1 & Out 2
And the MiniLab was configured to either ch9 or ch10, depending on whether I was sending notes to the MnM or MD, respectively. The four sliders were mapped to the MCL perf controls, which are like macros that control a range of parameters across the synths, and differed depending on where I was in the set.
In terms of preparing & arranging the material... For the MD patterns, MCL was useful because I could copy and paste multiple patterns at once, though it was still tedious. On the MnM, I could copy and paste patterns individually, or save multiple patterns via sysex to and from my PC with Elektroid via the U6 (since the U6 has USB MIDI ports too), but it required a lot of forethought to plan out exactly how I wanted to move my patterns around - and most of the time I was moving patterns around to try and figure out how to transition from one idea to the next. Also, since I was sending PGM out from MCL to the MnM, I needed the patterns on both synths to be lined up perfectly: A01 and A01 go together, A02 and A02 go together, etc.
It has to be said - arranging and preparing my material in this way has been quite a time-consuming process, but tedium is normally fine with me, and I think the result is worth it.
A few issues I encountered: rapid tempo changes between patterns could make the MIDI clock fall out of sync between the two synths, so I relied on speeding up the tempo pre-emptively at a couple of moments during the set. The other issue was that the MnM seemed to get overwhelmed by the CC data coming in from MCL's perf controls, which could cause more desync. This happened in the last minute of the set, actually, which is why I decided to end it with a tempo slowdown to try and work with the chaos - but I think that turned out to make for a great ending anyway. It could be that this is not MCL's fault and is actually due to the MIDI merge becoming overwhelmed, so I'll have to do some testing.
Another curious part of the process was processing the samples to be used on the MD. Since it only has 2.5MB of space, and only accepts uncompressed lossless audio, removing silence from audio as much as possible was vital, and downsampling from 44.1 to ~22kHz could also be used to save space where the higher frequencies weren't as important for a particular sound. Another way to address this has been to speed up audio and then slow it down when using it as needed, which can allow for longer samples - though I didn't end up doing that too much. Some of the things I used as material were the opening to Mako Mermaids, various SFX from White Day (2001), some basic drum samples, and even a couple of experiments I did with AI voice generation with Bark a few months back (heard as reversed whispering at some point).
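For a sense of scale, a quick back-of-the-envelope check, assuming 16-bit mono samples (the MD's actual internal format may differ):

```python
# Rough check of how far 2.5 MB goes, assuming 16-bit (2-byte) mono samples.
# The MD's actual internal sample format may differ; this is just for scale.
MEMORY_BYTES = 2.5 * 1024 * 1024

for rate in (44100, 22050):
    seconds = MEMORY_BYTES / (rate * 2)
    print(f'{rate} Hz: ~{seconds:.0f} s of audio')
# 44100 Hz: ~30 s; 22050 Hz: ~59 s - halving the rate roughly doubles the headroom
```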
As for the music itself... It comes from jams ranging from 2021 until now. The start was 2024, then it shifted into 2022-2023, then the part with the spooky chord progression is 2024 again, and the final track is a revival of something I did in 2021. Some of my favourite things about the custom firmware are the way it lets me randomise trigs, use different loop lengths per track, and do microtiming.
I have given the parts track names too:
- slifer the sky dragon
- puzzle pieces
- haunted house
- wwuw wuuw wwwu
- candy slices
- churns unending
- dancing in the rain
I enjoyed the gig a lot and I'm thrilled by the response it got, but it and the process that led up to it were marred a little by some health issues that I've had for the past year, which reached a peak in the days leading up to it, and which in fact led me to the hospital in the early morning of the 23rd (the day before the gig). I've had issues with a tender pain/ache in my chest, paired with dizziness, breathlessness, muscle weakness, and mild nausea, and haven't been sure what to make of it. I've received ECGs, blood tests, and examinations with a stethoscope, at least a couple of times each, but they've all come back normal each time. The only thing I was told about me physically is that I'm low on potassium, in a nutritional sense, which prompted me to eat lots of bananas.
I've been told that this may be a kind of psychosomatic, stress, or anxiety-induced issue. Initially, I acknowledged that it could be my anxiety, or at the very least that anxiety can't be helping things. But now, after the tests have come back clear many times, and after researching what these kinds of psychosomatic symptoms can be like, I think it's very possible I have developed a kind of panic disorder, which is annoying because I feel like the past year has been an improvement, mentally, on some quite nasty years past.
I have learned that these panic symptoms can happen very randomly and be very extreme for what they are. In moments, my symptoms make me very fearful; like I'm going to die. My body locks up as I find it harder and harder to breathe, and it most often happens at night, causing my sleep to take a hit too. But I emphasise that they are truly random - I haven't been able to pin down any particular cause, and it feels like it's the arrival of the symptoms which cause my fear, rather than the other way around.
There is a moderate chance of it subsiding if I get distracted or pulled into something that I don't have to think much about: talking to a friend, playing a video game, etc. (Super Mario Sunshine has been good to revisit). At rest, it gets worse - it's like my brain is wired to fill up with thoughts at every available moment, and when nothing is going on, it nervously tries to seize the moment without my consent or awareness.
Anyway, I'm having more tests done soon, including a heart monitor, as it still may be a physical condition.
I just hope that I can cope with it going forward. It would suck if coming out of my shell, and beginning to commit to things such as this gig, is making it worse. After all, I want to play more. But I haven't yet been able to shake the negative association formed with doing so.
Thanks for reading my first lengthy post. I will release the recording / master of the set very soon, and shortly after I plan to release the one I did back in 2022 as well (which was like an early run of a lot of the concepts I've talked about here).
A website my very own website
Welcome! This is my first post.
I still have lots of work to do... fill the site with content, fix various bugs. The style isn't quite how I want it yet, but this should do. It's been a lot of fun building this from the ground up.
I want to use an emote here but I haven't added functionality for that yet.
Anyway, bye for now.
UPDATE: Now with emotes