User:Lafleur/dogfooding


embedded software synthesizer

I work in a street theatre company. We sing and play percussion instruments while we walk (among other things, see https://onyrikon.org ). Among those instruments are keyboards. This makes for a quite demanding use case, as the musician carries both the keyboard and the sound system (we already own those). In this scenario (going live in 45 days), I proposed to use an embedded system to both power the keyboard and read its MIDI signal, outputting sound to the sound system. I first implemented this on a Raspberry Pi Zero, but its single 1 GHz ARM11 core finally turned out to lack the needed punch. I quite recently discovered postmarketOS, and quickly felt it could be the right fit for this scenario.

the use case

The embedded system has to be as light as possible, and yet provide power for itself and the MIDI keyboard. It should also run a software synthesizer reliably and output high-fidelity sound on a mini-jack output. A critical requirement is low latency: talking with sound engineers, the acceptable threshold seems to be around 20ms, though in the end it's the pianist who judges the playability of the system. At the company we have already selected a set of three soundfonts to embed; one of them reproduces an acoustic piano and is very demanding: its samples are far heavier than those of the other fonts. Besides, the system has to be open source, mostly because that's the only way I know of staying in control of what we do with it.

the setup

The Snapdragon architecture seemed a natural successor to the Raspberry Pi, well fitted for the purpose: 4 cores at around a GHz looked like quite enough for the job. I chose the Samsung Galaxy A3 because it is mostly supported in postmarketOS and very light (110g). Nowadays I expect most phones to have decent audio output quality. Its main drawback is that the battery is non-replaceable.

The stable software choice for this use case seems to be fluidsynth. There are other software synthesizers out there, but this is the one I have tested for years, and I understand its behaviour fairly well, which is very much needed, as you will see if you keep reading.

the process

USB On-The-Go power supply

After installing pmOS on the phone, the first surprise was that it neither recognized nor powered up the MIDI keyboard. The phone features an RT5033 MFD that is in charge of distributing power to the micro-USB plug and the camera. There is an open-source driver for this MFD in the downstream kernel, but it never made it to mainline. Using this source and a bit of i2c discovery (playing with module load/unload to find out the i2c address of the MFD), I came up with the following script:

file : /usr/local/bin/toggle-otg-power
#!/bin/sh

# This is a simple script to turn USB OTG power supply on or off
# on device Samsung A3. It talks to the RT5033 MFD on i2c bus 7
# at address 0x34.

bailout() {
	echo "Usage: $0 off|on"
	exit 1
}

i2c_assign_bits() {
	bus=$1
	address=$2
	reg=$3
	mask=$4
	data=$5

	value=$(i2cget -y $bus $address $reg)

	# Quit if i2cget failed :
	test "$?" != 0 && exit 1

	value=$(printf "0x%x" $((value & ~mask)))
	value=$(printf "0x%x" $((value | data)))

	i2cset -y $bus $address $reg $value
}

test "$#" != 1 && bailout
test "$1" != "on" && test "$1" != "off" && bailout

if test "$1" = "on"; then
	# Configure and enable the OTG power output
	# (register values derived from the downstream RT5033 driver).
	i2c_assign_bits 7 0x34 0x02 0xfc 0xdc
	i2c_assign_bits 7 0x34 0x01 0x01 0x01
	i2c_assign_bits 7 0x34 0x01 0x02 0x00
	logger "turned USB OTG power on"
else
	# Disable the OTG power output.
	i2c_assign_bits 7 0x34 0x01 0x01 0x00
	logger "turned USB OTG power off"
fi
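
If you need to redo the i2c discovery part on another device, i2cdetect (from the same i2c-tools family as the i2cget and i2cset used above) is the convenient tool. The bus number 7 and address 0x34 are the values I found on the A3; yours may differ:

# List the i2c buses known to the kernel :
i2cdetect -l
# Scan bus 7 ; addresses claimed by a kernel driver show up as UU :
i2cdetect -y 7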

My first error was to automatically turn the USB power supply on at boot time with an OpenRC service. This prevented the phone from charging when a charger was plugged in, because it kept trying to feed the USB cable with its own power. When I realized this, it was too late and the phone couldn't boot anymore. Luckily, after leaving it unattended for 24 hours I could boot it to lk2nd, let it recharge for a while, then reboot to pmOS and disable the OpenRC service. After that I added a udev rule so that the USB power supply is only toggled when needed:

file : /etc/udev/rules.d/10-usb-otg.rules
# Detect usb gadgets that would need OTG power supply.
SUBSYSTEM=="usb", KERNEL=="1-0:1.0", ACTION=="add", RUN+="/usr/local/bin/toggle-otg-power on"
SUBSYSTEM=="usb", KERNEL=="1-0:1.0", ACTION=="unbind", RUN+="/usr/local/bin/toggle-otg-power off"

The 1-0:1.0 kernel name only matches when a USB gadget is plugged in - plugging the phone into a computer or a power supply doesn't trigger it. When unplugging, the "remove" action turned out to be unusable, I don't know why; triggering on "unbind" works fine. Now USB gadgets should be magically powered on when plugged in.
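
To check the rule without rebooting, the standard udev tooling is enough; nothing here is specific to the A3:

# Reload the rules, then watch the events while plugging the keyboard in :
udevadm control --reload
udevadm monitor --udev --property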

using MIDI devices

OK, now you have USB power - but the MIDI keyboard still doesn't show up: the default mainline kernel doesn't ship the necessary modules. You'll have to compile your own kernel with those modules added. I did it, but at the moment I can't pull the details from my hard drive, it's a bit of a mess on that side. I'll update this page if I can. Hopefully, if I ask kindly, the pmOS team will enable them in future mainline kernel releases, so you won't need to bother with this.
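
For what it's worth, class-compliant USB MIDI keyboards are normally handled by the ALSA USB audio driver together with the ALSA sequencer, so those are the kernel options I would look at first - I can't guarantee this is the exact set I enabled. pmbootstrap has a handy kconfig editor for this (the package name below should be the A3's msm8916 mainline kernel, adjust if needed):

# Hypothetical example - adjust the package name to your device's kernel :
pmbootstrap kconfig edit linux-postmarketos-qcom-msm8916
# Options to check :
#   CONFIG_SND_USB_AUDIO  (snd-usb-audio, which also provides USB MIDI)
#   CONFIG_SND_SEQUENCER  (the kernel side of the alsa_seq MIDI driver)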

requesting the audio device

Some processes are worse than zombies - you can kill pulseaudio as much as you like, it always comes back. And if you try to use an ALSA backend for whatever program, the sound still ends up going through pulseaudio (you can see it in the Settings > Sound panel). To get rid of it, you need the following file:

file : ~/.pulse/client.conf
autospawn = no

Now, on the next session login, you'll be free of it. (On newer PulseAudio versions, the file lives in ~/.config/pulse/client.conf instead.)
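
To get rid of the already-running daemon without logging out, you can also kill it once by hand; with autospawn disabled, it should finally stay dead:

pulseaudio -k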

realtime priority

You may set up fluidsynth as an OpenRC service, in which case it will run as root and have realtime priority rights by default. But if you start it as a user process, it will not be able to raise its realtime priority unless you modify the system's /etc/security/limits.conf. You may append the following line to it:

user - rtprio 90

But limits.conf is a PAM feature; on Alpine (pmOS is Alpine-based, remember?) you will need to install the shadow package so that PAM is used by all login processes; and (as of today) you need to add the following line to /etc/pam.d/autologin, before the first session ... line:

session required pam_limits.so

Why autologin? Because it's the file PAM uses when tinydm starts the user session (it took me ages to figure that out). To make sure this works, check the fluidsynth logs: if it warns you that it cannot set realtime priority, then you're not done yet.
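
A quick way to double-check is to look at the limits the kernel actually applied to the running process (assuming a single fluidsynth process is running as your user):

grep -i 'realtime priority' /proc/$(pidof fluidsynth)/limits

If the hard limit still reads 0, PAM is not applying limits.conf to your session yet.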

Also, ssh uses another PAM configuration file, so remote shells won't benefit from this setup. I didn't dig into this because it's not relevant here.

running fluidsynth

In the end I prefer to run fluidsynth as a system service, so that if for some reason the tinydm session fails (which does happen from time to time), at least the fluidsynth service keeps running. Here are my OpenRC setup files:

file : /etc/init.d/fluidsynth
#!/sbin/openrc-run
# Copyleft.

description="fluidsynth software synthetizer"
command="/usr/bin/fluidsynth"
command_args="$ARGUMENTS $SOUNDFONT"
command_background="yes"
pidfile="/run/${RC_SVCNAME}.pid"
start_stop_daemon_args='-3 logger -4 logger'

depend() {
	need localmount udev alsa
}

The start_stop_daemon_args send fluidsynth's stdout and stderr to the system logs, so you can read them with e.g. logread -f. That's all there is to it. I like how straightforward OpenRC service files are, don't you? And here is the bundled conf file:

file : /etc/conf.d/fluidsynth
# On the samsung a3ulte, hw:0,0 is the earpiece _or_ the minijack when plugged
# in, and hw:0,2 is the speaker.  On Linux, alsa_seq is the default MIDI driver.
# The periods and period_size below are tuned for the a3ulte.

audio_backend=alsa
alsa_device="hw:0,2"
# Defaults to 64 :
period_size=256
# Defaults to 16 :
periods=2
samplerate=48000
midi_backend=alsa_seq
cpu_cores=4

ARGUMENTS="-qis -a $audio_backend -r $samplerate -z $period_size -c $periods -m $midi_backend -o midi.autoconnect=1 -o audio.alsa.device=$alsa_device -o synth.cpu-cores=$cpu_cores"

# Fluidsynth uses /usr/share/soundfonts/default.sf2 by default.
# You can also append a midi file to read as a test :

SOUNDFONT="/usr/share/soundfonts/default.sf2"
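
As a sanity check against the 20ms requirement stated above: with these values, the ALSA buffer alone accounts for period_size x periods / samplerate = 256 x 2 / 48000 ≈ 10.7 ms of output latency, which leaves some headroom for MIDI input and synthesis on top.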

This is just a suggestion for an easy-to-tweak conf file. Feel free to derive from it! With the -s option, you should be able to connect to the fluidsynth shell on its default port with:

nc localhost 9800

The connection is a bit disconcerting because at first it simply doesn't print anything. Just type help and press enter to get started.
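
From this shell, a quick way to check that sound actually comes out is to play a note by hand (channel 0, middle C, velocity 100):

noteon 0 60 100
noteoff 0 60

And once you're happy with the setup, the usual OpenRC commands make the service start at boot:

rc-update add fluidsynth default
rc-service fluidsynth start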