cTune v1.0: Part 2 - Backstage
ctune is a Linux-based internet radio stream player for the console, written entirely in C. It uses the RadioBrowser API as a source for searching streams and getting station information.
This is part 2 of 3 blog posts detailing the process of implementing cTune's backend.
All UML diagrams are made with PlantUML.
1. Skeleton implementation
1.1 Bare-bone Logger
The first priority was the Logger. To start with, a complete API was defined. Even though the preliminary implementation was mostly made up of stubs and forwarded log messages to the terminal output, it allowed the rest of cTune to make calls to it as if it were completely implemented.
The LOG(..) call in the interface is actually a MACRO that expands to either log(..) or logDBG(..) depending on whether the build has the 'DEBUG' flag enabled or not. The only difference is that the debug version adds the file name and line number to the log message.
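For illustration, such a macro could be wired up as below; the signatures of log(..) and logDBG(..) here are assumptions, not cTune's actual ones:
/* Sketch only: in a real code base the functions would be namespaced
 * (e.g. ctune_Logger_log) so `log` doesn't clash with math.h's log(). */
#ifdef DEBUG
    #define LOG( lvl, fmt, ... ) logDBG( (lvl), __FILE__, __LINE__, (fmt), ##__VA_ARGS__ )
#else
    #define LOG( lvl, fmt, ... ) log( (lvl), (fmt), ##__VA_ARGS__ )
#endif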
The log level ENUM's integer values map to their RFC 5424 equivalents (CTUNE_LOG_TRACE being the exception). This way they can be forwarded as-is to potential Syslog calls in a failure scenario such as when the output file cannot be opened.
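As an illustration, the mapping could look like this; only CTUNE_LOG_TRACE is named above, so the other level names and the exact subset of RFC 5424 severities used are my assumptions:
/* RFC 5424 severities: 2=Critical, 3=Error, 4=Warning, 6=Informational, 7=Debug */
typedef enum {
    CTUNE_LOG_FATAL   = 2, /* maps to RFC 5424 "Critical"           */
    CTUNE_LOG_ERROR   = 3, /* maps to RFC 5424 "Error"              */
    CTUNE_LOG_WARNING = 4, /* maps to RFC 5424 "Warning"            */
    CTUNE_LOG_MSG     = 6, /* maps to RFC 5424 "Informational"      */
    CTUNE_LOG_DEBUG   = 7, /* maps to RFC 5424 "Debug"              */
    CTUNE_LOG_TRACE   = 8  /* the exception: no RFC 5424 equivalent */
} ctune_LogLevel_e;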
1.2 Iteration #1
The first iteration of cTune's "skinny" pipeline consisted of:
- Fetching/Decoding a random stream (from a hardcoded url) with ffmpeg into PCM data and
- Sending the PCM data to the system's sound server (via SDL2 to keep things simple).
The program's main.c source file was used as a make-shift Controller during the "skinny" pipeline's implementation.
Getting ffmpeg to output PCM data from a stream involves 5 distinct stages (a condensed sketch follows the list):
- Open input (source) stream and get its information,
- Get and open a matching codec for the source stream in order to decode the compressed/coded source audio,
- Setup resampling to convert the decoded source audio into mixed PCM data,
- Setup the audio sink (a.k.a. the output), and
- Decode and resample the frames and send them to the output sink as they are received.
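Condensed into code, the five stages look roughly like the sketch below. This is illustrative rather than cTune's actual implementation: it assumes the ffmpeg 4.x-era libav* API, an interleaved signed 32-bit output format and the output's write(..) function being passed in; error handling and cleanup are stripped out.
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswresample/swresample.h>

void play_stream( const char * url, void (* write)( const void *, int ) ) {
    /* 1. Open the input stream and read its information */
    AVFormatContext * fmt_ctx = NULL;
    avformat_open_input( &fmt_ctx, url, NULL, NULL );
    avformat_find_stream_info( fmt_ctx, NULL );

    /* 2. Find and open a decoder matching the audio stream's codec */
    AVCodec * codec = NULL;
    int       idx   = av_find_best_stream( fmt_ctx, AVMEDIA_TYPE_AUDIO, -1, -1, &codec, 0 );
    AVCodecContext * dec = avcodec_alloc_context3( codec );
    avcodec_parameters_to_context( dec, fmt_ctx->streams[idx]->codecpar );
    avcodec_open2( dec, codec, NULL );

    /* 3. Setup resampling from the decoded format to interleaved signed 32-bit PCM */
    SwrContext * swr = swr_alloc_set_opts( NULL,
                                           dec->channel_layout, AV_SAMPLE_FMT_S32, dec->sample_rate,
                                           dec->channel_layout, dec->sample_fmt,   dec->sample_rate,
                                           0, NULL );
    swr_init( swr );

    /* 4. The audio sink would be initialised here (format, sample rate, channels) */

    /* 5. Decode, resample and forward frames as they arrive */
    AVPacket * packet = av_packet_alloc();
    AVFrame  * frame  = av_frame_alloc();

    while( av_read_frame( fmt_ctx, packet ) >= 0 ) {
        if( packet->stream_index == idx && avcodec_send_packet( dec, packet ) == 0 ) {
            while( avcodec_receive_frame( dec, frame ) == 0 ) {
                uint8_t * out = NULL;
                av_samples_alloc( &out, NULL, dec->channels, frame->nb_samples, AV_SAMPLE_FMT_S32, 0 );
                int n = swr_convert( swr, &out, frame->nb_samples,
                                     (const uint8_t **) frame->data, frame->nb_samples );
                write( out, (int)( n * dec->channels * sizeof( int32_t ) ) );
                av_freep( &out );
            }
        }
        av_packet_unref( packet );
    }
    /* teardown of the contexts omitted for brevity */
}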
Although ffmpeg does have the ability to output to different sound servers baked into its library, I opted not to use this feature as it would have made output selection problematic if other 'player' plugins did not include such a feature.
As for the audio output, there are just 2 stages:
- Initialising the audio server/library with the output format specifications (output format, sample rate, number of channels, sample size), and
- Writing PCM data to the output.
To connect the two, the output's write(..) method is just called in ffmpeg's 'loop' in stage 5.
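As an illustration of those two stages with SDL2, the sketch below splits them into an init and a write function; the exact interface (function names, the queued-audio approach, the 32-bit sample format) is an assumption on my part:
#include <SDL2/SDL.h>

static SDL_AudioDeviceID device = 0;

/* Stage 1: initialise SDL's audio subsystem with the PCM format coming out of the player */
static int init( int sample_rate, int channels ) {
    SDL_Init( SDL_INIT_AUDIO );

    SDL_AudioSpec want = { 0 }, have = { 0 };
    want.freq     = sample_rate;
    want.format   = AUDIO_S32SYS;   /* signed 32-bit PCM, native byte order */
    want.channels = (Uint8) channels;
    want.samples  = 4096;

    device = SDL_OpenAudioDevice( NULL, 0, &want, &have, 0 );
    SDL_PauseAudioDevice( device, 0 ); /* un-pause so queued audio starts playing */
    return ( device != 0 );
}

/* Stage 2: write PCM data to the output (called from the player's decode loop) */
static void write_pcm( const void * buffer, int buff_size ) {
    SDL_QueueAudio( device, buffer, (Uint32) buff_size );
}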
1.3 Iteration #2
With that done, the stream could be played through the speakers, so the next step was to insert the pipeline's RadioBrowser component and its dependencies to fetch available radio streams from the RadioBrowser API.
The API documentation described 4 steps to get that data:
- "Get a list of available servers" via a reverse DNS lookup,
- "Randomize the server list" and pick one,
- Send a query as a GET request to the server,
- Receive the data (JSON format)
1.3.1 Network IO
Networking IO functionality needed to be implemented prior to making any requests to the RadioBrowser API. I went with the well-known 'Berkeley sockets' API since it comes as standard on most Linux distributions and, judging by the man pages, it can do both the DNS lookup (for the server info) and the reverse lookup (for the server address). Plain HTTP requests[1] are also doable but not HTTPS. For that, the OpenSSL library is required on top and it, too, comes as standard on most Linux systems. As for randomising the list of servers, it's just a matter of shuffling the order of the items in the list.
[1] HTTPS is actually mandatory; there is a line (when I looked) in the Radio Browser API documentation that states: "Now you have a list of multiple names of servers which you can use to directly connect to with HTTP and preferably HTTPS.". That's not quite true as it turns out. Doing a request in plain HTTP returns some HTML that advises to use HTTPS for the request. So, essentially, one can only get the required data (the JSON) via HTTPS.
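A sketch of the server-discovery step using the standard resolver calls; the 'all.api.radio-browser.info' round-robin hostname comes from the Radio Browser documentation, everything else is illustrative:
#include <stdio.h>
#include <sys/socket.h>
#include <netdb.h>

/* Resolve the round-robin hostname to its IPs, then reverse-lookup each IP
 * to get the individual API server names. */
int list_radiobrowser_servers( void ) {
    struct addrinfo hints = { 0 }, * results = NULL;
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;

    if( getaddrinfo( "all.api.radio-browser.info", NULL, &hints, &results ) != 0 )
        return -1;

    for( struct addrinfo * it = results; it != NULL; it = it->ai_next ) {
        char hostname[256] = { 0 };

        if( getnameinfo( it->ai_addr, it->ai_addrlen,
                         hostname, sizeof( hostname ),
                         NULL, 0, NI_NAMEREQD ) == 0 )
        {
            printf( "%s\n", hostname ); /* e.g. one of the regional API servers */
        }
    }

    freeaddrinfo( results );
    return 0;
}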
1.3.2 RadioBrowser API
To create a functional API for the RadioBrowser component, studying the remote RadioBrowser API was required. Based on that, a draft of the component's API was created covering both the mandatory and optional cTune requirements. The mandatory ones are:
- Get radio stations based on a filter,
- Get radio stations based on a category type (ByCategory_e),
- Get a list of available sub-categories within a given list category (ListCategory_e),
- Increase the click counter for a radio station.
The rest are "nice to have but not absolutely needed" (i.e.: optional - see fig.5).
To pack all that information received from Radio Browser, some objects and a data-structure were created.
- RadioStationInfo_t - DTO container for information of a radio station stream (name, url, country, etc...).
- RadioStationFilter_t - DTO container for the filter arguments sent with a search query to the RadioBrowser API.
- CategoryItem_t - DTO container for a category's description (name, station count and, optionally, state).
- ClickCounter_t - DTO container for the returned values after a click event is sent to the RadioBrowser API.
- ServerStats_t - DTO container for the RadioBrowser remote server stats.
- ServerConfig_t - DTO container for the RadioBrowser remote server configuration information.
- RadioStationVote_t - DTO container for the returned values after a voting event is sent to the RadioBrowser API.
- NewRadioStation_t - DTO container for the radio stream info and returned values for sending a new radio station to the RadioBrowser API.
- Vector_t - Dynamically resizable contiguous array data-structure.
1.3.3 DTO Namespaces
cTune's Data Transfer Objects have, within their scopes, certain functionalities that need to be accessed in multiple places within the application. The most basic of these include, but are not limited to, initialisation and de-allocation (freeing). Because of this, it makes more sense to keep these functionalities in a dedicated namespace for each of the DTOs.
Using RadioStationFilter as an example: in addition to the init, deep copy, freeing, getter and setter functions, there is also a parameterization method. That function takes the filter's private variables and generates from them an HTTP GET query string.
E.g.: in the request below, the parameterized fields string is ?tag=jazz&order=bitrate&reverse=true&limit=1000
GET /json/stations/search?tag=jazz&order=bitrate&reverse=true&limit=1000
User-Agent: ctune/1.0.0
Host: https://fr1.api.radio-browser.info
Content-type: application/json; charset=utf-8
Another DTO of note, RadioStationInfo, is a central piece of the software and so requires a few more functionalities than most.
Aside from the comparison functions (equivalence and <, ==, > comparators), there is the hashing function. The latter is needed to create HashMap index keys for favourite RadioStationInfo_t entries (that comes later). The FNV-1 hash algorithm is used here to create hashes from the radio station UUIDs.
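For reference, a 64-bit FNV-1 over a UUID string boils down to the following (the function name and the choice of 64-bit width are assumptions on my part):
#include <stdint.h>

/* 64-bit FNV-1: multiply by the FNV prime first, then XOR in each byte. */
static uint64_t fnv1_hash( const char * uuid ) {
    uint64_t hash = 14695981039346656037ULL; /* FNV offset basis */

    for( const char * c = uuid; *c != '\0'; ++c ) {
        hash *= 1099511628211ULL;             /* FNV prime */
        hash ^= (uint64_t) (unsigned char) *c;
    }

    return hash;
}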
By keeping methods that only act on a DTO within said DTO's namespace, it is easier to refactor them along with any variables inside the DTO itself. One of the challenges in software development is knowing how to group functionalities and variables in a semantically meaningful way, which is not always straightforward.
In C, there are no classes like in C++, meaning that, in order to somewhat mimic the whole shebang, structs can be used in both object and namespace capacities. The trick involves binding methods to function pointers inside a const struct, like so:
/* MyObject.h */
typedef struct MyObject {
    int a;
    int b;
} MyObject_t;

extern const struct MyObject_Namespace {
    MyObject_t (* init)( int a, int b );
    MyObject_t (* add)( const MyObject_t * lhs, const MyObject_t * rhs );
} MyObject;

/* MyObject.c */
#include "MyObject.h"

static MyObject_t MyObject_init( int a, int b ) {
    return (MyObject_t) {
        .a = a,
        .b = b,
    };
}

static MyObject_t MyObject_add( const MyObject_t * lhs, const MyObject_t * rhs ) {
    return (MyObject_t) {
        .a = ( lhs->a + rhs->a ),
        .b = ( lhs->b + rhs->b ),
    };
}

const struct MyObject_Namespace MyObject = {
    .init = &MyObject_init,
    .add  = &MyObject_add,
};
Usage is just a matter of doing "MyObject_t o = MyObject.init( 1, 3 );" for example.
1.3.4 JSON data extraction
Once the DTOs were done, the next item on the work list was packing the JSON data into the matching DTOs.
Parsing data can be precarious and, thus, needs a fair amount of error control in the implementation so that, when it fails, it does so gracefully. There are 4 distinct possible failure points:
- json-c library parse failure,
- unexpected JSON object,
- unrecognised key in a JSON object, and
- cast failure from a JSON object value to the DTO variable type.
Appropriate action and logging for these needed to be taken care of in each JSON-to-DTO parsing method implementation. The same goes for the inverse parsing of RadioStationInfo_t DTOs to a JSON array, which is used for exporting favourites to a JSON file. Informative logging becomes key to troubleshooting issues promptly and successfully.
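As an illustration of what such a parsing method might look like with json-c, here is a sketch covering those four failure points; a stand-in struct replaces RadioStationInfo_t and only the "name" and "url" keys (which are genuine Radio Browser station attributes) are handled:
#include <stdbool.h>
#include <string.h>
#include <json-c/json.h>

/* Minimal stand-in for the fields of interest in RadioStationInfo_t */
typedef struct {
    char name[256];
    char url[2048];
} StationInfo;

static bool parseStation( const char * raw_json, StationInfo * rsi ) {
    memset( rsi, 0, sizeof( *rsi ) );

    struct json_object * root = json_tokener_parse( raw_json );

    if( root == NULL )
        return false; /* (1) json-c parse failure */

    if( !json_object_is_type( root, json_type_object ) ) {
        json_object_put( root ); /* free the parsed object */
        return false; /* (2) unexpected JSON object */
    }

    struct json_object * field = NULL;

    if( json_object_object_get_ex( root, "name", &field ) &&   /* (3) missing/unrecognised keys get logged here */
        json_object_is_type( field, json_type_string ) )       /* (4) wrong value type == cast failure point    */
    {
        strncpy( rsi->name, json_object_get_string( field ), sizeof( rsi->name ) - 1 );
    }

    if( json_object_object_get_ex( root, "url", &field ) &&
        json_object_is_type( field, json_type_string ) )
    {
        strncpy( rsi->url, json_object_get_string( field ), sizeof( rsi->url ) - 1 );
    }

    json_object_put( root ); /* frees the whole tree */
    return true;
}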
2. Growing the implementation
At this point, the basic pipeline worked - a random station could now be fetched and its stream played to the speakers. The next steps involved:
- growing the skinny pipeline implementation,
- refactoring/developing to match the full version of the component design,
- adding the peripheral components and connecting them to the rest.
2.1 Error numbers
C has its own errno system (errno.h), and so do some of the external libraries used. The aim, for cTune, was to catch these errno values at the points of failure, record them in the log (including their string representation) and then either return a cTune-specific error number from the function or set it directly using ctune_err.set(..).
Error numbers in cTune are grouped into categories which are, in turn, each given an offset so that categories can be further expanded when/where needed.
MACRO | Offset | Description
---|---|---
CTUNE_ERR_NONE | 0 | No error
- | 1 | Generic errors (alloc, overflow, cast, etc...)
CTUNE_ERR_LOG | 10 | Logger-specific errors
CTUNE_ERR_IO | 20 | IO errors
CTUNE_ERR_THREAD | 30 | Threading errors
CTUNE_ERR_NETWORK_IO | 100 | Networking errors
CTUNE_ERR_RADIO_BROWSER_API | 200 | 'Radio Browser' web API errors
CTUNE_ERR_PARSE | 300 | Parsing errors
CTUNE_ERR_PLAYER | 400 | Player plugin errors
CTUNE_ERR_AUDIO_OUT | 500 | Audio output plugin errors
CTUNE_ERR_UI | 600 | UI errors
CTUNE_ERR_ACTION | 700 | Failed UI action errors
Setting an error had to be a globally accessible feature in cTune. Because there would be more than one thread running, the set/get methods needed to be thread-safe. In addition, when errors are set in quick succession from various parts of the running application, each needs to be logged as it comes in. Side note: a 'print' callback was added to cover the case where the error description needs to be printed somewhere else, like in the UI for example.
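A minimal sketch of what a thread-safe set/get pair with a print callback could look like; apart from the ctune_err.set(..) idea itself, the internals below are assumptions:
#include <stdio.h>
#include <pthread.h>

static struct {
    pthread_mutex_t mutex;
    int             err;
    void         (* print_cb)( int err ); /* optional callback to surface errors in the UI */
} ctune_error = { .mutex = PTHREAD_MUTEX_INITIALIZER, .err = 0, .print_cb = NULL };

static void ctune_err_set( int err ) {
    pthread_mutex_lock( &ctune_error.mutex );
    ctune_error.err = err;
    fprintf( stderr, "error set: %d\n", err ); /* stand-in for logging each error as it comes in */
    if( ctune_error.print_cb != NULL )
        ctune_error.print_cb( err );
    pthread_mutex_unlock( &ctune_error.mutex );
}

static int ctune_err_get( void ) {
    pthread_mutex_lock( &ctune_error.mutex );
    int err = ctune_error.err;
    pthread_mutex_unlock( &ctune_error.mutex );
    return err;
}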
2.2 Logger
Although only outputting to the terminal is less labour intensive, it doesn't help with troubleshooting when the output is larger than the terminal's buffer. It's better to funnel everything to a file for convenience's sake, especially as it can be saved for later viewing. Based on that, file output was implemented for the logger.
Here are the steps involved in logging message(s) in the final implementation:
- The Logger receives a message, unpacks and formats it into a ready-to-print string and adds it to the LogQueue,
- On an 'enqueue' event, the LogQueue uses a callback method (resume()) to signal the LogWriter that there are message(s) ready,
- On the resume() signal, the LogWriter wakes up if sleeping and resets the timer on the timeout,
- While the timeout has not reached 0 and there are items in the LogQueue, the LogWriter dequeues messages one-by-one and writes them out to the file output,
- Once all messages have been dequeued/processed and the timeout has reached 0, the LogWriter idles back to sleep until the next wake-up call from resume().
At first, the resume() callback didn't exist and the worker thread in the LogWriter continuously checked for messages to write out. This presented a problem as it essentially kept the CPU pegged on useless calls to the LogQueue even when there were no messages left to dequeue. Fortunately, there's a way to "idle" a thread and wake it up on command. Using that feature along with a timeout system and a callback for signalling new messages in the queue eliminated the issue entirely. In the end, even though the abstract design was sound, the implementation took a couple of attempts to get right.
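The "idle and wake on command" part maps onto a condition variable. A stripped-down sketch of such a LogWriter loop is shown below; the names and the way the timeout is handled are assumptions, not cTune's exact code:
#include <stdbool.h>
#include <pthread.h>

static pthread_mutex_t mutex   = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wakeup  = PTHREAD_COND_INITIALIZER;
static bool            pending = false; /* set by the LogQueue's resume() callback */

/* Called by the LogQueue on an 'enqueue' event to wake the writer thread. */
static void resume( void ) {
    pthread_mutex_lock( &mutex );
    pending = true;
    pthread_cond_signal( &wakeup );
    pthread_mutex_unlock( &mutex );
}

/* LogWriter worker thread: sleeps until signalled instead of spinning on the queue. */
static void * logwriter_run( void * arg ) {
    (void) arg;

    for( ;; ) {
        pthread_mutex_lock( &mutex );
        while( !pending )                    /* idle: no CPU burned while the queue is empty */
            pthread_cond_wait( &wakeup, &mutex );
        pending = false;
        pthread_mutex_unlock( &mutex );

        /* ...dequeue messages one-by-one and write them to the log file,
           carrying on until the timeout elapses with an empty queue... */
    }

    return NULL;
}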
2.3 Configuration
This component is responsible for saving and loading state between sessions. That includes operator-defined configuration variables, the volume and queued station to be restored on cTune's start, and previously saved bookmarked favourites (ctune.cfg and ctune.fav).
Loading can be done by a call to the category's load method (favs.loadFavourites() for favourites and cfg.loadCfg() for the configuration). Everything is then accessible and assignable through getters and setters. At the end, to save any modifications, the write-out methods can be used (favs.saveFavourites() for favourites and cfg.writeCfg() for the configuration). Currently, the target files are simply overwritten with whatever is set in the configuration instead of being edited in place.
It is also where the UI settings can be loaded/saved between sessions.
2.4 Plugin system
The plugin system was the next area of focus but, before diving in, the ffmpeg player implementation, which at that point lived inside the RadioPlayer component, needed to be refactored into its own self-titled component (see fig.11).
Since the player calls on the audio output's write function, write( const void * buffer, int buff_size ), whatever audio output plugin is in use needs to be injected into the player.
To keep the plugin loading process away from the player and sound output, RadioPlayer's interface includes 2 methods for setting these: loadPlayerPlugin(..) and loadSoundServerPlugin(..). RadioPlayer is now responsible for initialising the player with the output plugin and managing the player thread. Once the plugins are loaded, everything is nicely encapsulated and playing a stream is just a matter of using the playRadioStream(..) and stopPlayback(..) functions.
All that was left to do was to make the Plugin component deal with all the loading and unloading minutiae for both the chosen player and output plugins.
Since Plugin was to have its API walled off behind the Settings component, pass-through methods were added to the Settings API. These load the plugins based on the choices fetched from the configuration file (or a default if that fails). Settings is also responsible for the unloading and cleanup when it is itself freed during cTune's shutdown phase.
dlfcn.h is used to dynamically load the plugins. There are 4 functions involved in this:
- Opening the target plugin and getting its handle using dlopen(..),
- Getting the run-time address(es) of the plugin methods/objects with dlsym(..),
- Closing the handle of the plugin using dlclose(..),
- Checking the error state of the last function call with dlerror().
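Put together, loading a plugin might look something like the sketch below; the plugin path, the exported symbol name and the ctune_Player_t interface struct are all assumptions made for illustration:
#include <stdio.h>
#include <dlfcn.h>

/* Hypothetical interface struct that each player plugin would export
 * under an agreed-upon symbol name. */
typedef struct {
    const char * (* name)( void );
    int          (* playStream)( const char * url );
} ctune_Player_t;

ctune_Player_t * load_player_plugin( const char * so_path ) {
    void * handle = dlopen( so_path, RTLD_NOW ); /* e.g. ".../plugins/ffmpeg.so" */

    if( handle == NULL ) {
        fprintf( stderr, "dlopen failed: %s\n", dlerror() );
        return NULL;
    }

    dlerror(); /* clear any previous error state */

    ctune_Player_t * player = dlsym( handle, "ctune_Player" ); /* assumed export symbol */
    const char     * error  = dlerror();

    if( error != NULL ) {
        fprintf( stderr, "dlsym failed: %s\n", error );
        dlclose( handle );
        return NULL;
    }

    /* NOTE: a real implementation would also keep `handle` around
     * so the plugin can be dlclose()d when unloaded. */
    return player;
}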
2.5 Controller
The Controller was created to be the centrepiece of cTune, acting as a master API. It abstracts complex actions into single-call functions and acts as a gateway interface onto which the UI can latch and through which it can drive the backend.
It is itself bootstrapped by the driver (i.e.: main.c), which initialises it, loads any relevant command line options, and attaches the UI callbacks to it during cTune's init stage. After that, it can be used as the UI's sole point of interaction with the backend.
2.6 CLI
The CLI component's purpose was to separate the parsing and processing of the command line arguments away from main.c so as to keep the latter cleaner. It takes in the arguments, checks their validity, parses them into ArgOption_t objects and, in turn, inserts them into an "actionable" list ready to be processed. Based on what options were parsed, the CLI processor returns a state from which main.c can either:
- Exit with an error,
- Exit without an error,
- Continue to launch (no actionable args found),
- Process actionable arguments and then continue launching.
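Those four outcomes map naturally onto a return enum along these lines (the identifiers are illustrative, not cTune's actual ones):
typedef enum {
    CLI_STATE_EXIT_FAIL,    /* exit with an error                      */
    CLI_STATE_EXIT_OK,      /* exit without an error                   */
    CLI_STATE_CONTINUE,     /* continue to launch (no actionable args) */
    CLI_STATE_PROCESS_ARGS  /* process actionable args, then continue  */
} cli_State_e;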
2.7 Playback logger
In order to keep some sort of record of what was played during a session, the PlaybackLog was added as a "last-minute" feature. All it does is output to a file the radio station details when a stream begins playing, and any song title plus the local timestamp when the stream's metadata changes.
By default, the playback log file (playlog.txt) is overwritten when cTune starts.
2.8 Plugins
With ffmpeg and SDL2 already in place, the remaining planned output plugins (ALSA and PulseAudio) were added to the roster.
In addition to the planned plugins, VLC was added as a player and sndio as a sound output too.
The main issue when using AV libraries to implement both player and output plugins came down to the technical language. Some terms are used interchangeably to describe particular properties across the various implementations which, coming from a zero-knowledge perspective, created some degree of confusion for me. Most documentation (when it exists and is up-to-date!) tends to come from a position of familiarity and in-depth knowledge and, thus, is not always beginner friendly, I found.
2.8.1 ALSA
Implementing a plugin for ALSA wasn't too challenging once relevant example code snippets were found.
There were some issues getting the PCM data sent to the audio server though. ALSA's snd_pcm_writei( snd_pcm_t * pcm, const void * buffer, snd_pcm_uframes_t size ) method requires a frame count instead of the usual 'frame size'/'PCM buffer size' encountered in the other output libraries used. The solution was to simply divide the incoming buffer size by the byte size of a single frame. To do that, snd_pcm_frames_to_bytes(..) is called during initialisation to convert 1 frame to its byte size. The value is then cached so it can be used for the divide operation in the write(..) method's implementation.
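Expressed as code, that write path could look something like this sketch (the names are mine and error handling beyond a basic underrun recovery is left out):
#include <alsa/asoundlib.h>

static snd_pcm_t * pcm_handle      = NULL;
static ssize_t     bytes_per_frame = 0;

/* Called once during initialisation, after the PCM device has been opened
 * and its hardware parameters set. */
static void cache_frame_size( void ) {
    bytes_per_frame = snd_pcm_frames_to_bytes( pcm_handle, 1 );
}

/* Output plugin's write(..): converts the byte count into the frame count
 * that snd_pcm_writei(..) expects. */
static void write_pcm( const void * buffer, int buff_size ) {
    snd_pcm_uframes_t frames = (snd_pcm_uframes_t)( buff_size / bytes_per_frame );
    snd_pcm_sframes_t ret    = snd_pcm_writei( pcm_handle, buffer, frames );

    if( ret == -EPIPE )               /* underrun: try to recover the stream */
        snd_pcm_prepare( pcm_handle );
}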
The one particularity about this sound engine is that there are really just 2 possible volume sliders to control: "PCM" and "Master". A cTune-specific volume control on the ALSA mixer doesn't exist, meaning that when the volume is modified, it is changed globally.
2.8.2 PulseAudio
PulseAudio is ...complicated, but once one has gone through the pain of setting it up and getting something to play, things become a little clearer. Nevertheless, the learning curve starts pretty steep. This is mostly due to the extended API it offers.
For cTune, most of the PulseAudio API is not actually required since features such as seeking and pausing don't serve much purpose for internet radio streams. The callbacks that deal with state changes for the context or stream are mostly used as an opportunity to log more information in case something goes sideways. The only situation where a callback does anything of note is on stream overflow: the PulseAudio stream is flushed as an attempt to recover.
The rest of the implementation is mostly about setting up the main loop, connecting a stream to it and then writing the PCM data to it as it is received from whatever player plugin is used.
2.8.3 SNDIO
The sndio library was an unplanned extra and only made it in because the API is so superbly simple that it took barely any time at all to get working!
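To illustrate that simplicity, a bare-bones sndio playback setup looks roughly like this (a sketch assuming 16-bit signed stereo PCM at 44.1kHz; cTune's actual parameters may differ):
#include <stddef.h>
#include <sndio.h>

/* Open the default device for playback, describe the PCM format, then write. */
int sndio_example( const void * pcm, size_t pcm_size ) {
    struct sio_hdl * hdl = sio_open( SIO_DEVANY, SIO_PLAY, 0 );

    if( hdl == NULL )
        return -1;

    struct sio_par par;
    sio_initpar( &par );
    par.bits  = 16;       /* sample size       */
    par.sig   = 1;        /* signed samples    */
    par.pchan = 2;        /* playback channels */
    par.rate  = 44100;    /* sample rate       */

    sio_setpar( hdl, &par );
    sio_start( hdl );

    sio_write( hdl, pcm, pcm_size ); /* blocking write of the PCM buffer */

    sio_stop( hdl );
    sio_close( hdl );
    return 0;
}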
The only downside of this implementation is the lack of volume control support. From what I understood, as a BSD-centric library it doesn't really offer this feature outside of that environment.
The only way to get volume control on sndio, and also have one that is cTune-specific in the case of ALSA, would be to add a mixer between the player and sound output plugins. That way the volume could be adjusted in the PCM data before it is sent to the sound output plugin, but it would require revisiting the application's architecture.
2.8.4 VLC
The VLC player plugin was also added as an extra at the end of cTune's development as an alternative to ffmpeg. LibVLC has a well designed API and getting something playing is actually not very difficult:
- create VLC instance,
- create VLC player,
- create media,
- play media in player.
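In minimal form, and against the libVLC 3.x API, those four steps come down to something like this (the hard-coded sleep and the lack of error handling are purely for illustration):
#include <unistd.h>
#include <vlc/vlc.h>

int play_with_vlc( const char * url ) {
    /* 1. create VLC instance */
    libvlc_instance_t * vlc = libvlc_new( 0, NULL );

    /* 2. create VLC player */
    libvlc_media_player_t * player = libvlc_media_player_new( vlc );

    /* 3. create media from the stream's URL */
    libvlc_media_t * media = libvlc_media_new_location( vlc, url );

    /* 4. play media in player */
    libvlc_media_player_set_media( player, media );
    libvlc_media_release( media ); /* the player keeps its own reference */
    libvlc_media_player_play( player );

    sleep( 30 ); /* let it play for a bit before tearing everything down */

    libvlc_media_player_stop( player );
    libvlc_media_player_release( player );
    libvlc_release( vlc );
    return 0;
}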
Having said that, there were a couple of issues that came up during the plugin's implementation which caused some headaches...
The first was a bug in the format specifier for VLC's PCM output which, regardless of what was put in, always ended up as 's16l'. Specifying 's32n' to bring it in line with the originally hard-coded default in cTune didn't work. The solution was to make the PCM format part of the input variables in the SoundOut output plugin interface (as seen previously in fig.13). This way, when ffmpeg is used it can set the PCM output format to the usual 32 bits and, for VLC, 16 bits, and the sound outputs know what to expect.
The second issue revolves around the event callback. When the title changed in the stream's metadata, neither libvlc_MediaPlayerTitleChanged nor any other event was triggered. This means that the first song title obtained from the stream on playback never changed thereafter. The only way to get this feature working was to make periodic checks and comparisons of the metadata's title against a stored copy, updating the copy when relevant.
Both issues are present as of LibVLC v3.0.14. This is why the VLC player plugin is considered only as a fallback option for the moment even though it works (mostly).
3. Final state
3.1 Minor addition(s)
A somewhat unplanned extra feature was added while working on the UI implementation: stream testing (which also auto-retrieves the codec and bitrate information). The catch was that the inputted URI needed to be validated as such. For that, libCURL was added as a dependency since it has that feature baked in.
I'm aware that the libcurl library could have been used to do all the network IO jobs in NetworkUtils and, if I had to do it again, I would do just that. So much for trying to keep dependencies to a minimum!
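For reference, the URI validation itself can be done with libcurl's URL API along the lines of the sketch below (this is illustrative and not necessarily how cTune calls it):
#include <stdbool.h>
#include <curl/curl.h>

/* Returns true if libcurl can parse the given string as a valid URL. */
static bool is_valid_url( const char * uri ) {
    CURLU * handle = curl_url();

    if( handle == NULL )
        return false;

    CURLUcode rc = curl_url_set( handle, CURLUPART_URL, uri, 0 );

    curl_url_cleanup( handle );
    return ( rc == CURLUE_OK );
}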
Aside from that, the following was also done:
- minor refactoring of variable names leftover from the first trials and experiments to bring them up to the standard convention used,
- insertion of forgotten/missing static keywords in front of some methods belonging in "namespaces",
- miscellaneous cleanup of dead code.
3.2 Overview
The final diagram has a couple of things omitted to keep it clean and more readable:
- globally used interfaces (Logger and ctune_err) are not connected to everything,
- some components are not connected to their spooler parent components (see notes appended in the diagram),
- technically, the player plugins should all be connected to the AudioOut interface.
When starting an implementation, I like to begin with a "skinny" version of the main data processing pipeline, expand it, then refactor the working parts into the target design's components.
This way I can check my assumptions as I go and get a working implementation of the most important functionality. As long as playing a stream from a URL and outputting the PCM data to a sound library works, even if in a rough state, the rest can be added in with the assured knowledge that the core MVP is sound. It's a lot easier to trust the original planning and design when music comes out of the speakers!