Running A University Linux-based Audio Lab: Part 3 — Software Installation in Detail

In Part 1 and Part 2 of this series I introduced Linux as a viable operating system for a university music and audio programming lab and described, in some detail, Linux audio configuration for low-latency applications. In this part I will walk through *most* of our post-install script so you have a clearer picture of the configuration required. But first…

0. Revisiting Two Things:

Why KDE Neon?

I talked a little about this in part 1 of this series, but I’ll put a few more words here about it. There are tradeoffs with all your base (are belong to us) operating systems. Either they are super stable, but potentially out of date (we had this problem with Debian), or they might be up-to-date or even “edge”, but break things. We had this latter problem when we attempted to get more recent packages through Debian Sid (the unstable branch). We also experienced this when testing various Arch Linux distros and repositories in an attempt to both remain up-to-date and reduce the footprint and potentially conflicting packages. In all of the above cases, the up-front time and the ongoing maintenance (Arch broke almost every month) were too much. I am not a Lab Manager (well, unofficially…) I am a Lecturer in computer music. Running the lab is something I do to support the classes I teach and music and audio research in general for the department and the school.

For me, the *least* amount of work, consistently, came from something Ubuntu-based. Now Ubuntu has lots of different flavors, including a multimedia distro called Ubuntu Studio. But through trial and error I settled on Neon (of all the KDE Plasma distros) because it was the most stable and the least prone to breakage. To those who know, this might seem amazing given that the whole point of Neon is newness. However, by keeping the core on the LTS version of Ubuntu, they have somehow gotten the best of both worlds as far as I’m concerned.

Your ITS Department

In the earlier articles I didn’t dwell on the role of the institution in creating your lab image, but below we can’t avoid it. In reality, you will want (read: need) to work very closely with your IT department to correctly implement your lab image. The reason is that ITS has already implemented a technosystem into which you will be placing your computers. You will need to make sure that your authentication methods, virus mitigation strategies, etc., are in alignment with the existing system and that you are in compliance with university policy. You *don’t* need to completely understand what they give you or require of you, you just need to make sure it works. How will you know? It won’t work if you don’t. (Waa waaaaaa.) An example: “correctly” imaged machines that students nonetheless cannot log into. You will need a good working relationship with someone in ITS to ferret this out. (Hint, it’s sssd.)

Okay, let’s get into it.

1. Management:

I used to use GitHub to manage our scripts and config files; however, some packages have become available only as direct downloads and their file size is over the 25 MB limit that GitHub imposes. I am working on a permanent replacement, but for now I use Notion.so, which I use for most everything else I do. I work with the department IT support specialist to make sure the ITS side of things is up-to-date and reflects best practices. That piece basically consists of making sure machines (virtual or real) are where they are supposed to be on the internal network so authentication works, share points are available, etc. The post-install script I maintain incorporates the part of the department script that attaches new CS machines to the domain, along with individual config files for the different pieces of software installed when the script runs after the base OS is installed.

2. The beginning:

If you are using newly purchased hardware, be sure your MAC addresses (Ethernet specifically) are registered with your university so you can pull updates during installation *to save time later*. You will have to do this eventually anyway to connect to your network. If you are using existing equipment, be sure that the ownership in whatever registry your Uni uses is up-to-date. (Be a team player — plus, there’s a possibility someone in the lab will do dumb dumb things and you want ITS to be able to contact *you* quickly to remediate the problem rather than have them search around and get all mad and threaten to take away your toys.)
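
If you need to look up the MAC addresses to register, ip will print them for every interface (look for the link/ether lines):

# List every network interface along with its MAC (link/ether) address
ip link show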

After this, you will want to install your base OS. Again, we use KDE Neon, so I make a fresh install boot drive using dd and then employ sneakernet to get the image onto all the machines. I always update as I install, so that’s not a necessary step post-install. Part of this initial install process is to create a local system administrator. Naturally, we make the name something non-obvious and hard-to-guess for those attempting to co-opt our network.
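
For the boot drive itself, the dd invocation is roughly the following; the ISO name and the target device are placeholders (check lsblk first, because dd will overwrite whatever you point it at):

# Write the Neon installer image to a USB stick (replace the ISO name and /dev/sdX with yours)
sudo dd if=neon-user-current.iso of=/dev/sdX bs=4M status=progress conv=fsync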

3. The Script Part I: the ITS part

The post-install script is one part of the assets we pull down from Notion.so (formerly GitHub). The other is a *configs* folder that contains configuration files, program files, source files, etc. This and the install script are downloaded to the Desktop folder of the administrative account on each machine.

The top of the script contains all of the elements necessary to attach the machines to the Yale network. I will not describe that section in detail because your config will be different, but the following are the applications that support these efforts.

First, the script is run with sudo, so the commands below do not show it. If you try these out one by one to test various things ( 😐 ) you will have to add sudo to each command.

At the very top of the script is the following line, which simply creates a variable holding the path to the administrator’s Desktop folder for use throughout the install script.

SPTH='/home/'$(whoami)'/Desktop'

Next we disable IPv6 and update all packages using pkcon refresh && pkcon update.
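
I won’t reproduce the exact IPv6 lines here, but the usual sysctl approach looks roughly like this (the file name under /etc/sysctl.d is arbitrary):

# Disable IPv6 now and persist the setting across reboots
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
printf 'net.ipv6.conf.all.disable_ipv6 = 1\nnet.ipv6.conf.default.disable_ipv6 = 1\n' > /etc/sysctl.d/60-disable-ipv6.conf
# Refresh package metadata and update everything
pkcon refresh && pkcon update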

We then install the packages required for the CS Department integration:

pkcon install -y krb5-user ssh sssd cron-apt ssmtp mailutils | tee -a ~/post_install_output

krb5-user is a package to support remote authentication and file systems.

ssh is for secure, remote shell access to the machines.

sssd is a package to support remote authentication.

cron-apt is used to check package status at regular intervals.

ssmtp and mailutils are used to email information to the system administrator or other ITS personnel.

And a note on the install command. pkcon is preferred by Neon to ensure package coherence. (It uses apt as the back end, but disables *upgrade* to ensure new packages are always downloaded when available.) The -y argument causes an automatic “yes” to any interactive request from the install, allowing the script to run unattended. At the end of the install command, after the package list, there’s a pipe ( | ) that passes all standard output into the tee program, which appends the output to a specific file, in this case our post_install_output file, which is, essentially, a log file.

We then copy sssd configurations, set up some defaults for the user on first login (again, using our own config files), configure the login screen to allow network users (it defaults to presenting only local users) by copying a custom 50-unity-greeter.conf file, and lastly add the department network printers (standard on all department computers).
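
I won’t paste our actual department configs, but the shape of that block is roughly the following; every file name, destination path, and printer URI here is a stand-in for the real ones in the configs folder:

# Copy the department sssd config into place (sssd insists on root ownership and mode 600)
cp $SPTH/configs/sssd.conf /etc/sssd/sssd.conf
chmod 600 /etc/sssd/sssd.conf
systemctl restart sssd
# Allow network users at the login screen (destination path may differ on your image)
cp $SPTH/configs/50-unity-greeter.conf /usr/share/lightdm/lightdm.conf.d/
# Add a department network printer (name and URI are placeholders)
lpadmin -p dept-printer -E -v ipp://print.example.edu/printers/dept-printer -m everywhere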

4. The Script Part II: the exciting part

Alright, maybe ‘exciting’ isn’t the word, but here’s the music/audio part and the part that is specific to our lab.

First, we change the default group ID for the audio group to match that of the Debian system used by the CS Department. If you don’t do this, your users, even when added to the audio group, will not be able to run JACK in realtime mode. (This was a fun one to troubleshoot!)

groupmod --gid 63 audio | tee -a ~/post_install_output
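
The reason the group matters at all is that the PAM realtime limits are granted to the audio group by name. A stock /etc/security/limits.d/audio.conf (normally dropped in when jackd2 is installed and realtime priority is enabled) looks something like this:

# /etc/security/limits.d/audio.conf
@audio - rtprio 95
@audio - memlock unlimited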

Next, we install the utilities required to read and write drives formatted as exFAT. This is the recommended format for maximum compatibility when moving files between the lab machines and other operating systems.

echo "Now installing exfat utilities"
pkcon install -y exfat-fuse exfat-utils | tee -a ~/post_install_output

Next, we use the verbatim installation instructions from the KXStudio maintainer to install the repositories.

echo "Now installing KXStudio Repos"
# Install required dependencies if needed
pkcon install -y apt-transport-https gpgv
# Remove legacy repos
dpkg --purge kxstudio-repos-gcc5
# Download package file
wget https://launchpad.net/~kxstudio-debian/+archive/kxstudio/+files/kxstudio-repos_10.0.3_all.deb
# Install it
dpkg -i kxstudio-repos_10.0.3_all.deb

Here we update our packages and install the desired meta audio packages, both from KXStudio and those recommended from Ubuntu. To see what is installed by these packages, simply use ‘apt-cache depends PACKAGENAME’. (Note: currently do *not* install the recommended audio package; it pulls in a broken Ardour package that fails to install.)

pkcon refresh | tee -a ~/post_install_output
pkcon install -y kxstudio-meta-audio-applications kxstudio-meta-audio-plugins-collection kxstudio-recommended-audio kxstudio-recommended-audio-plugins | tee -a ~/post_install_ouput
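
As mentioned above, to see exactly what one of these meta-packages pulls in, for example:

apt-cache depends kxstudio-meta-audio-applications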

Next we install the low-latency-hwe kernel:

pkcon install -y linux-lowlatency-hwe-20.04 linux-tools-lowlatency-hwe-20.04 | tee -a ~/post_install_output
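
After the next reboot you can confirm the machine actually booted into the lowlatency kernel:

# Should print a kernel version ending in -lowlatency
uname -r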

Then we install a few additional applications. QjackCtl and pulseaudio-module-jack are utilities to make working with JACK a little easier and to bridge Pulseaudio to JACK. Pure Data is a visual programming language for audio that is useful for quick prototyping on boards like the Bela Board.

pkcon install -y puredata-core qjackctl pulseaudio-module-jack | tee -a ~/post_install_ouput
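
The bridge itself is normally wired up by QjackCtl once JACK is running; done by hand it amounts to loading the two modules and pointing PulseAudio at the JACK sink, roughly:

# Load the PulseAudio<->JACK bridge (run as the logged-in user, with JACK already running)
pactl load-module module-jack-sink
pactl load-module module-jack-source
pactl set-default-sink jack_out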

The next chunk of code builds and installs SuperCollider and its associated libraries from source. This is not interesting, but I’ll put it here anyway.

# INSTALL DEPENDENCIES FOR SUPERCOLLIDER AND BUILD
echo "Now building SuperCollider" | tee -a ~/post_install_output

pkcon install -y build-essential libjack-jackd2-dev libudev-dev libsndfile1-dev libasound2-dev libavahi-client-dev libicu-dev libreadline-dev libfftw3-dev libxt-dev libcwiid-dev cmake subversion git qt5-default qt5-qmake qttools5-dev qttools5-dev-tools qtdeclarative5-dev libqt5webkit5-dev libqt5websockets5-dev libqt5svg5-dev libqt5webengine5 qtwebengine5-dev qtpositioning5-dev libqt5sensors5-dev | tee -a ~/post_install_output

mkdir sc_src
cd sc_src
git clone --recursive https://github.com/supercollider/supercollider.git | tee -a ~/post_install_output
cd supercollider
git submodule init && git submodule update
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DSC_ABLETON_LINK=OFF -DSC_ED=OFF -DSC_EL=OFF -DSC_VIM=OFF -DSC_IDE=ON -DNATIVE=ON .. | tee -a ~/post_install_output
make

# INSTALL SUPERCOLLIDER
echo "Now installing SuperCollider" | tee -a ~/post_install_output

make install | tee -a ~/post_install_output
ldconfig
# apt remove -y appmenu-qt5

# INSTALL SUPERCOLLIDER PLUGINS
echo "Now building SuperCollider plugins" | tee -a ~/post_install_output

cd $SPTH/sc_src
mkdir /usr/local/share/SuperCollider/Extensions
git clone --recursive https://github.com/supercollider/sc3-plugins.git | tee -a ~/post_install_output
cd sc3-plugins
mkdir build
cd build
cmake -DSC_PATH=../../supercollider .. | tee -a ~/post_install_output
make
make install | tee -a ~/post_install_output

Next, we install a few applications that we must download as-is or that are not available in repositories. Ardour is our DAW of choice. It’s a bit like flying an airplane: it takes a long time to learn, but once you do you can go anywhere. The LSP plugins are a series of *excellent* digital signal processing plugins that I use in my introductory computer music and production class. Oxe and Surge are formerly-commercial, but now open-source, synthesis instrument plugins. Both are excellent. FScape is a signal processing application for more experimental music.

# INSTALL ARDOUR
echo "Now installing Ardour 6.9" | tee -a ~/post_install_output
cd $SPTH
chmod +x configs/Ardour-6.9.0-x86_64-gcc5.run
cd configs/ && ./Ardour-6.9.0-x86_64-gcc5.run

# INSTALL LSP-PLUGINS (latest as of 01.29.18): update to change version number appropriately
echo "Now copying LSP plugins to /usr/lib/lv2" | tee -a ~/post_install_output
cd $SPTH
unzip configs/lsp-plugins-lv2-1.1.31-Linux-x86_64.zip -d configs/
cp -r configs/lsp-plugins-lv2-1.1.31-Linux-x86_64/lsp-plugins.lv2/ /usr/lib/lv2 | tee -a ~/post_install_output
chmod 755 -R /usr/lib/lv2/lsp-plugins.lv2/ | tee -a ~/post_install_output

echo "Now copying OxeFM to /usr/lib/vst" | tee -a ~/post_install_ouput
cd $SPTH/configs
cp oxevst* /usr/lib/vst/ | tee -a ~/post_install_output
cd skin
mkdir /usr/lib/vst/skin | tee -a ~/post_install_output
cp * /usr/lib/vst/skin | tee -a ~/post_install_output

# INSTALL SURGE SYNTH
echo "Now installing Surge Synth and dependencies" | tee -a ~/post_install_output
pkcon install -y xclip | tee -a ~/post_install_output
cd $SPTH/configs
dpkg -i surge-linux-x64-1.9.0.deb | tee -a ~/post_install_output

# INSTALL FSCAPE
echo "Now installing FScape from .deb file" | tee -a ~/post_install_output

cd $SPTH/configs
dpkg -i FScape_1.5.1_all.deb | tee -a ~/post_install_output

The last thing the script does is make sure the administrative user has the appropriate privileges.

usermod -aG audio,plugdev,dialout notOurRealAdminAccountName | tee -a ~/post_install_output

Network users (students, faculty, etc.) who are authenticated through the department LDAP have their permissions set there.

5. Staying up-to-date

Above I mentioned that cron-apt is used to check package status (daily) and that the email packages mail that status to the system admin (me). This, however, is not configured to auto-update anything. To date we have followed the model of preferring stability to security. That means that the systems aren’t updated more than once or twice per month, and then updates are done manually, first on a live-swap machine to make sure nothing major breaks, and then on all the lab machines via ssh. KDE Neon has proven very reliable with updates and, to date, has not resulted in workflow interruption.
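
The relevant cron-apt settings live in /etc/cron-apt/config; ours boil down to something like the following (the address is a placeholder), and since cron-apt’s default actions only download packages, nothing gets installed behind our backs:

# /etc/cron-apt/config (excerpt) -- mail when upgrades are pending, never auto-install
MAILTO="lab-admin@example.edu"
MAILON="upgrade"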

6. The idle/logout problem

If you’ve used a lab or run one, you know this problem well. What do you do when lab users are idle for long periods of time (left to get lunch…) or do not log out of their workstation? This might be simple: your IT department may require you to implement some specific logout strategy. If they do not, you have the following choices:

1. Do nothing. The desktop stays up and running. This is necessary for long-running batch processing or video editing, etc., which may take hours. It’s also a security risk, as anyone can access that user’s files.

2. Lock the screen. Choose a time – five, ten, fifteen minutes – and lock the screen, requiring the user to enter their passphrase to access the working desktop. Do not auto-logout. This lets processes keep running, but keeps others from tampering with the desktop. It is a problem, though, because the user may never come back (they simply forgot to log out) and the next would-be user of the workstation cannot use it. (Some Linux distros will allow you to log in as a different user, leaving the other user logged in as well, but this breaks some things. Specifically, the previous user will own realtime JACK audio priority on the machine and the new user will not be able to use it, rendering the station useless.) This means the new user has to restart the computer before using it.

3. Lock the screen after x minutes and log the user out after y minutes. This provides some “grace period” for the user, but ultimately will log out the user, killing their processes. This is good for the next user, but potentially bad if the current user has unsaved work that they will lose.

Because of the nature of the work done in our lab, we go with strategy 2. The system secures the user’s session when it locks the screen (after 15 minutes), but does not kill processes or log the user out. This means, occasionally, that the next user will have to restart the machine if someone leaves without logging out. That’s okay. It takes under a minute and, believe it or not, most systems work better after a restart anyway, as any hung or crashed system services get restarted as well.
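
On KDE Plasma the 15-minute lock lives in kscreenlockerrc, so it can be pushed out per-user or baked into the default config; setting it by hand looks roughly like this:

# Lock the Plasma session automatically after 15 minutes of idle time
kwriteconfig5 --file kscreenlockerrc --group Daemon --key Autolock true
kwriteconfig5 --file kscreenlockerrc --group Daemon --key Timeout 15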

Conclusion

Regardless of how you choose to deploy and run your lab, the time-versus-money problem will mostly be what determines how you proceed — that and your general level of computer and IT expertise. You can always do things manually rather than script them. Manual is easier, but takes more time. If you have staff, however, this might be the way to go. In both computer labs I’ve managed at Yale, student workers helped with installation, authorization, and maintenance routines. It’s a win-win: I didn’t have to do everything, especially the tedious stuff, and the students got paid.

The reality is that, in most cases, there’s no absolute right way to proceed (unless it’s configuring your authentication processes). For example, something we do *not* do is backups. This may seem insane, but the configuration and deployment is already mostly automated and we have hot-swappable spares, so a machine can simply be swapped out if the drive gets fragged. This means we do not have to support backup procedures or purchase and manage backup hardware. We are also *not responsible for student work*. As soon as you do backups, you are responsible for everything. This is very unpleasant. We make it clear to all users that there are no backups and that backing up their files is their personal responsibility. In my 14 years of doing this we have never had a catastrophic failure that cost students work. (Knocks on wood and apologizes to the computer gods!) We replace our machines every three to four years, so the odds of an HD failure after the initial break-in period (where you typically find out if you have a bad drive) are low. This means that when projects come due and I get the one or two emails from students saying, in effect, that the dog ate their homework (their project became corrupt somehow), I simply give them a day or two of extension and go on with my life (rather than having to dig through backup archives to find what may or may not be a working copy of a project file).

I hope you have found this three-part series informative. I already have some ideas for short posts on the subject that seem out of scope here. Click the subscribe button if you want to be alerted when those get posted, and thanks for reading.