MythTV Transcoding (3): Transcoding for iPad Playback

I’m doing a series of posts here on automated transcoding of recordings in MythTV. The idea is to explain the basics, then evolve the design to end up with recordings suitable for playback on the iPad, which is a little more advanced.

  • The complete series of articles is available at this page

Last time I showed where you go in MythTV to edit the default job settings, and what happens when you hit the Transcode button in MythWeb or let MythTV transcode automatically after recording.

Today I’m going to continue the series by leaving MythTV to one side, and showing how to create a transcoded recording suitable for playback on an iPad (or other Apple device – iPod, iPhone, Mac OS X, etc). Next time we’ll incorporate this into MythWeb for the final result.

Before that, let’s look at the different ways that playback on Apple platforms might be achieved (more suggestions are welcome, in the comments):

  • Install the MythFrontend application natively onto an OS X device.
  • Install the VLC Media Player onto an OS X device (as it can handle the ASX Stream link in MythWeb) – in fact this is what I do on my own desktop Mac.
  • Install MythPodcaster, a Transcoder and RSS publisher, and subscribe to transcoded recordings in iTunes (leads to syncing them to the iPad).
  • Create a MythTV Job to transcode into H.264 format (compatible with all Apple platforms), and add links to the generated files in MythWeb.

It’s the last of these which I’m going to cover in this article. However, an honourable mention must be made of MythPodcaster. First, most of the work here is taken from looking at what that excellent tool does. Second, if you don’t want to mess about editing MythWeb, MythPodcaster has its own web interface for not only managing transcode schedules and profiles, but also browsing and playback. I recommend you check it out.

Each MythTV user job is configured as a command which can be executed at the shell command-line. That means the first thing to do is get transcoding working at the command-line for ourselves. The tool for this is ffmpeg, but we must compile a special version which can output H.264. The following instructions are for an Ubuntu system and based on the MythPodcaster InstallationOnMythBuntu page.

I’m going to build a custom version of ffmpeg, separate from the one installed with my Ubuntu system. The instructions below aren’t intended to be the cleanest solution, but they get the job done. First, install the build dependencies:

sudo apt-get install build-essential git-core yasm checkinstall zlib1g-dev pkg-config

And we’ll do this work as the mythtv user in a new directory:

sudo mkdir -p /usr/local/lib/ffmpeg-264
sudo chown mythtv:mythtv /usr/local/lib/ffmpeg-264
sudo su - mythtv
cd /usr/local/lib/ffmpeg-264

Build the faac library:

cd /usr/local/lib/ffmpeg-264
wget http://downloads.sourceforge.net/project/faac/faac-src/faac-1.28/faac-1.28.tar.gz
tar -zxvf faac-1.28.tar.gz
cd faac-1.28
./configure --prefix=/usr/local/lib/ffmpeg-264 --enable-static --disable-shared
# work around a gcc compatibility issue by commenting out a conflicting strcasestr declaration
sed -i -e "s|^char \*strcasestr.*|//\0|" common/mp4v2/mpeg4ip.h
make && make install

Build the x264 library:

cd /usr/local/lib/ffmpeg-264
git clone git://git.videolan.org/x264.git
cd x264
./configure --prefix=/usr/local/lib/ffmpeg-264 --enable-static --disable-shared
make && make install

Build the libmp3lame library:

cd /usr/local/lib/ffmpeg-264
wget http://downloads.sourceforge.net/project/lame/lame/3.98.4/lame-3.98.4.tar.gz
tar -zxvf lame-3.98.4.tar.gz
cd lame-3.98.4
./configure --prefix=/usr/local/lib/ffmpeg-264 --enable-static --disable-shared
make && make install

Build ffmpeg:

cd /usr/local/lib/ffmpeg-264
wget http://www.ffmpeg.org/releases/ffmpeg-0.8.3.tar.gz
tar -zxvf ffmpeg-0.8.3.tar.gz
cd ffmpeg-0.8.3
./configure --prefix=/usr/local/lib/ffmpeg-264 --extra-version=static --disable-debug --enable-static --extra-cflags=--static --disable-ffplay --disable-ffserver --disable-doc --enable-gpl --enable-pthreads --enable-postproc --enable-gray --enable-runtime-cpudetect --enable-libfaac --enable-libmp3lame --enable-libx264 --enable-nonfree --enable-version3 --disable-devices --extra-ldflags=-L/usr/local/lib/ffmpeg-264/lib --extra-cflags=-I/usr/local/lib/ffmpeg-264/include
make && make install

To make life a little easier, I link to this new binary from a more memorable location:

exit # if you are still the "mythtv" user
sudo ln -s /usr/local/lib/ffmpeg-264/bin/ffmpeg /usr/local/bin/ffmpeg-264
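As a quick sanity check, confirm the new binary runs and was built with the encoders we need (these are standard ffmpeg flags, nothing exotic):

/usr/local/bin/ffmpeg-264 -version
/usr/local/bin/ffmpeg-264 -codecs 2>/dev/null | grep -E 'libx264|libfaac|libmp3lame'

If libx264, libfaac and libmp3lame don’t show up in that list, revisit the configure output from the ffmpeg build.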

Now we need a magic incantation for ffmpeg-264 which will result in an H.264 file suitable for iPad playback. A good source of possible command line options is the MythPodcaster transcoding profiles XML file. In this file, the various iPad profiles, as their names suggest, ask for various formats and qualities of output (see each encoderArguments key).

These command line options in turn refer to ffmpeg presets (via -vpre) which should be shipped with ffmpeg in the ffpresets directory. Some of these might work, but I’m pretty happy with two presets from an older version of ffmpeg:

mkdir ~mythtv/.ffmpeg
cd ~mythtv/.ffmpeg
wget https://raw.github.com/FFmpeg/FFmpeg/release/0.6/ffpresets/libx264-main.ffpreset
wget https://raw.github.com/FFmpeg/FFmpeg/release/0.6/ffpresets/libx264-medium.ffpreset

One problem you might have with the ffmpeg options is selecting an audio track in recordings with multiple soundtracks (for example normal, and audio-descriptive). There’s no easy way to know which one you want (in my settings I simply force the choice). Here’s what I settled on, which seems to work OK for SD content; if you have other suggestions, please add a comment below:

-i <infile> -y -map 0:0 -map 0:1 -er 4 -f ipod -acodec libfaac -ac 2 -ab 128k \
    -ar 44100 -deinterlace -vcodec libx264 -vpre medium -vpre main -b 1200k \
    -bt 1200k -maxrate 1200k -bufsize 1200k -level 30 -r 30 -g 90 -async 2 \
    -threads 0 -v 0 <outfile>

These parameters are wrapped up in a script, making things easier for MythTV configuration. This file, called h264xcode, is also placed in /usr/local/bin (and obviously, season this to taste for your system’s recording directories):

#!/bin/bash
# h264xcode: transcode a MythTV recording to H.264 for iPad playback

# MythTV passes the channel ID and recording start time as arguments
CHANID=$1
STARTTIME=$2

# where the original recordings live, and where the transcoded files go
INDIR="/mnt/mythtv-video/Videos/MythTV"
OUTDIR="/mnt/mythtv-video/Videos/iPad"

# time the run, and execute ffmpeg-264 at a low system priority
PROG="time nice -n 19 /usr/local/bin/ffmpeg-264"
PARAMS="-y -map 0:0 -map 0:1 -er 4 -f ipod -acodec libfaac -ac 2 -ab 128k -ar 44100 -deinterlace -vcodec libx264 -vpre medium -vpre main -b 1200k -bt 1200k -maxrate 1200k -bufsize 1200k -level 30 -r 30 -g 90 -async 2 -threads 0 -v 0"

LOG="/var/log/mythtv/h264xcode.log"
OUTFILE="${OUTDIR}/${CHANID}_${STARTTIME}"

COMMAND="${PROG} -i ${INDIR}/${CHANID}_${STARTTIME}.mpg ${PARAMS} ${OUTFILE}.m4v.tmp"

# log the command, run it, then move the finished file into place
echo "h264xcode: ${COMMAND}" >> $LOG
$COMMAND >> $LOG 2>&1
mv -f ${OUTFILE}.m4v.tmp ${OUTFILE}.m4v >> $LOG 2>&1
rm -f ${OUTFILE}.m4v.tmp >> $LOG 2>&1

Let’s work through what’s going on, here. The script takes two arguments – the channel ID and the start time of the recording (we’ll see how MythTV passes these, later). From them it’s possible to locate the original recording file on disk, using the $INDIR variable.

Next a simple log is set up, and the proposed ffmpeg-264 command echoed to that file. Finally the command itself is run, with a low system priority (nice -n 19), and the output directed to the log file. The H.264 file is saved in $OUTDIR.
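Before handing this over to MythTV, it’s worth a manual test run as the mythtv user. The channel ID and start time below are invented examples – substitute values from one of your own recordings (the .mpg files in the recordings directory are named <chanid>_<starttime>.mpg):

sudo -u mythtv /usr/local/bin/h264xcode 1001 20120101120000
tail -f /var/log/mythtv/h264xcode.log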

So far so good. All that remains is to let MythTV know about this new user job command, which we’ll cover in the next article in this series.

Many thanks to Murray Cox, Seb Jacob, and Colin Morey for providing valuable feedback on this post!
Posted in mythtv, transcoding | 5 Comments

Is it silly that tmux is fun?

No, I don’t think it’s a bad thing to get a zing of excitement when you find a new tool that improves your life. Maybe you know what I mean – that feeling of happiness at saving time, remembering more easily how to do things, and satisfaction with a new workflow.

Recently I migrated from the venerable screen to tmux, and whilst it’s one of those changes where the old tool had no real show-stopping problems, tmux immediately feels cleaner and better thought through.

I’ll leave you to read the docs and list of features yourself, but please do check this tool out if you’re an avid screen user. I’ve already got many more tmux sessions/windows/panes open than I ever felt comfortable with in screen, saving me a lot of time and effort when working remotely.

Posted in devops, linux, productivity | 1 Comment

Smokeping+lighttpd+TCPPing on Debian/Ubuntu

Some notes on getting Smokeping to work on Debian/Ubuntu using the lighttpd web server, and the TCPPing check.

Install the lighttpd package first, so that the subsequent smokeping package installation notices it doesn’t need to pull in the Apache web server. However, Smokeping doesn’t auto-configure itself for lighttpd, so a couple of commands are necessary:

# lighttpd-enable-mod cgi
# /etc/init.d/lighttpd force-reload
# ln -s /usr/share/smokeping/www /var/www/smokeping

Visiting your web server’s base URL should show a lighttpd help page, and visiting the /cgi-bin/smokeping.cgi path should show the Smokeping home page with its logo images working.
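If you’d rather check from the command line, something like this on the server itself should return an HTTP 200 status for both:

curl -sI http://localhost/ | head -1
curl -sI http://localhost/cgi-bin/smokeping.cgi | head -1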

Install the TCPPing script by downloading it from http://www.vdberg.org/~richard/tcpping and saving it somewhere like /usr/local/bin/tcpping (setting the execute bit, too). Obviously, use this path in your Smokeping Probe configuration:

+ TCPPing

binary = /usr/local/bin/tcpping
forks = 10
offset = random
# can be overridden in Targets
pings = 5
port = 21

For the TCPPing check, make sure you have the standalone tcptraceroute package installed. You might find an existing /usr/sbin/tcptraceroute command is available, but this is from the traceroute package and won’t work with the TCPPing script.
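For completeness, here’s a sketch of a matching entry in the Targets configuration – the host is an invented example, and port shows a per-target override of the probe default:

+ Services

++ MailServer
menu = Mail Server
title = TCP ping to the mail server (SMTP)
probe = TCPPing
host = mail.example.com
port = 25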

Posted in devops, monitoring, networking | 1 Comment

The Limoncelli Test

Over at the excellent Everything Sysadmin blog is a simple test which can be applied to your Sysadmin team to assess its productivity and quality of service. It’s quite straightforward – just 32 things a good quality team ought to be doing, with a few identified as must-have items.

Of course I’m not going to say anything about my current workplace, but I thought it would be interesting to assess my previous team as of October 2010, when I left. I’m incredibly proud of the work we did, and of both our efficiency and effectiveness in delivering services with limited resources. That’s reflected in the score of (drumroll…) 31 out of 32!

If you have a Sysadmin team, or work in one, why not quickly run through the test for yourself?

Posted in devops, linux, monitoring | Comments Off

local::libs for Dist Development

Most of my distributions are on GitHub and built using Dist::Zilla. As the dependencies of each vary widely and I don’t want to muck up my workstation’s libraries, I set up a local::lib for each distribution’s development.

The App::local::lib::helper scripts make this really easy. As per the docs, I combine the helper with App::cpanminus (cpanm) for all installation.

To bootstrap a new local::lib area, I wrote this simple shell script:

#!/bin/bash
# script named "new-ll"

if [ -z "$1" ]
  then
    echo 'pass the distribution name, please'
    exit 1
fi

echo "creating local::lib for $1 ..."
sleep 3

curl -L http://cpanmin.us/ | perl - --notest --quiet --local-lib \
    ~/perl5/$1 \
    App::cpanminus \
    Dist::Zilla \
    App::local::lib::helper

Entering the correct environment for a distribution uses another helper script:

#!/bin/bash
# script named "go"

if [ -z "$1" ]
  then
    echo 'pass the distribution name, please'
    exit 1
fi

~/perl5/$1/bin/localenv bash

Which means my workflow for a new distribution is:

$ new-ll New-Dist-Name
$ go New-Dist-Name

Any Perl distributions installed in that shell (for example from dzil authordeps | cpanm or dzil listdeps | cpanm) will be placed into the new local::lib. It’s a simple ^D to exit.

However it’s not obvious that you’re within this special environment, so editing Bash’s $PS1 variable (the shell prompt) to include the following can help:

echo $PERL5LIB | cut -d'/' -f5
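For example, a minimal tweak in ~/.bashrc – just a sketch, season to taste – could be:

# prefix the prompt with the current local::lib name (if any)
PS1='($(echo $PERL5LIB | cut -d/ -f5)) '"$PS1"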

My deep thanks to the authors of the distributions used to create this neat setup.

Posted in perl | Comments Off

MailHide WP plugin reconfigured

I’ve just disabled the MailHide WordPress plugin, for content on this blog. I realised it was mangling example code in my posts. Sorry about that!

Posted in blogging | Comments Off

Migrate SourceForge CVS repository to git

Updated to include promoting and pushing tags.

I recently had need to migrate some SourceForge CVS repositories to git. I’ll admit I’m no git expert, so I Googled around for advice on the process. What I ended up doing was sufficiently distinct from any other guide that I feel it’s worth recording the process here.

The SourceForge wiki page on git is a good start. It explains that you should log into the Project’s Admin page, go to Features, and tick to enable git. Although it’s not made clear, there’s no problem having both CVS and git enabled concurrently.

Enabling git for the first time will initialize a bare git repository for your project. You can have multiple repositories; the first is named the same as the project itself. If you screw things up, it’s OK to delete the repository (via an SSH login) and initialize a new one.

Just like the SourceForge documentation, I’ll use USERNAME, PROJECTNAME and REPONAME within commands. As just mentioned, the initial configuration is that the latter two are equal, until you progress to additional git repositories.

Let’s begin by grabbing a copy of the CVS repository with complete history, using the rsync utility. When you rsync, there will be a directory containing CVSROOT (which can be ignored) and one subdirectory per module:

mkdir cvs && cd cvs
rsync -av rsync://PROJECTNAME.cvs.sourceforge.net/cvsroot/PROJECTNAME/* .

Grab the latest cvs2git code and copy the default options file. Change the run_options.set_project setting to point to your project’s module subdirectory:

svn export --username=guest http://cvs2svn.tigris.org/svn/cvs2svn/trunk cvs2svn-trunk
cp cvs2svn-trunk/cvs2git-example.options cvs2git.options
vi cvs2git.options
# edit the string after run_options.set_project, to mention cvs/PROJECTNAME

Also in the options file, set the committer name mappings in the author_transforms settings. This is needed because CVS logs only show usernames but git commit logs show human name and email – a mapping can be used during import to create a sensible git history.

vi cvs2git.options
# read the comments above author_transforms and make changes

But how do you know which CVS usernames need mapping? One solution is to run through this export and git import without a mapping, then run git shortlog -se to dump the committers. Blow the new git repo away, and re-import after configuring cvs2git author_transforms.
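For illustration, the mapping is a simple dictionary from CVS username to real name and email address, roughly as in the example options file (the entries here are invented):

author_transforms={
    'jbloggs' : ('Joe Bloggs', 'jbloggs@example.com'),
    'asmith'  : ('Alice Smith', 'asmith@example.com'),
    }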

The cvs2git utility works by generating the input files used by git’s fast-import command:

cvs2svn-trunk/cvs2git --options=cvs2git.options --fallback-encoding utf-8
git clone ssh://USERNAME@PROJECTNAME.git.sourceforge.net/gitroot/PROJECTNAME/REPONAME
cd REPONAME
cat ../cvs2svn-tmp/git-{blob,dump}.dat | git fast-import
git reset --hard

At this point, if you’re going to continue using this new git repository for work, remember to set your user.name, user.email and color.ui options.
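For example (the name and address are placeholders, naturally):

git config user.name "Firstname Lastname"
git config user.email "me@example.com"
git config color.ui auto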

Now you’re ready to push the repo back to SourceForge. I tested for myself that disabling so-called developer access to the repo in the SourceForge Project Member settings page does, in fact, prevent write access, as expected.

git push origin master

If you had tags on the CVS repo (git tag -l), they’ll have been imported as lightweight tags. Best practice is always to use annotated tags, so this short script will promote them for you:

git config user.name "Firstname Lastname"
git config user.email "me@example.com"
git tag -l | while read ver;
  do git checkout $ver;
  git tag -d $ver;
  GIT_COMMITTER_DATE="$(git show --format=%aD | head -1)" git tag -a $ver -m "prep for $ver release";
  done
git checkout master

Verify the tags are as you want, using something like:

git tag -l | while read tag; do git show $tag | head -3; echo; done

And then push them to the repository with:

git push --tags

Something you might want to do is set a post-commit email hook. For this you SSH to SourceForge, and if you have multiple projects remember to connect to the right one!

ssh -t USER,PROJECT@shell.sourceforge.net create
cd /home/scm_git/P/PR/PROJECTNAME

Download the post-receive-email script and place it in the hooks subdirectory; make it executable. Also set the permissions to have group-write, so your project colleagues can alter it if required. Set the necessary git options to allow the script to email someone after a commit. Season to taste.

curl -L http://tinyurl.com/git-post-commit-email > hooks/post-receive
chmod +x hooks/post-receive
chmod g+w hooks/post-receive
git config hooks.emailprefix "[git push]"
git config hooks.emailmaxlines 500
git config hooks.envelopesender noreply@sourceforge.net
git config hooks.showrev "t=%s; printf 'http://PROJECTNAME.git.sourceforge.net/git/gitweb.cgi?p=PROJECTNAME/REPONAME;a=commitdiff;h=%%s' ; echo;echo; git show -C ; echo"
git config hooks.mailinglist PROJECTNAME-COMMITS@lists.sourceforge.net

Remember to subscribe noreply@sourceforge.net to your announce list, if needed. Finally, set a friendly description on the repository for use by the git web-based repo browser:

echo 'PROJECTNAME git repository' > description

One other thing I did was enable an SSH key on my SourceForge account, as this makes life with SSH-based git much smoother :-) If you have the need to create additional git repositories, or even to replace the one created automatically, then it’s just a case of issuing the git command:

cd /home/scm_git/P/PR/PROJECTNAME
git --git-dir=REPONAME init --shared=all --bare

Good luck with your own migrations, and happy coding!

Posted in devops, git, linux, netdisco | 7 Comments

A Strategy for Opsview Keywords

At my previous employer, and recently at my current one, I’ve been responsible for migration to an Opsview based monitoring system. Opsview is an evolution of Nagios which brings a multitude of benefits. I encourage you to check it out.

Since the 3.11.3 release, keywords have been put front and centre of the system’s administration, so I want to present here what I’ve been working on as a strategy for their configuration. Keywords can support three core parts of the Opsview system:

  1. The Viewport (a traffic-lights status overview)
  2. User access controls (what services/hosts can be acknowledged, etc)
  3. Notifications (what you receive emails about)

Most important…

My first bit of advice is do not ever set the keywords when provisioning a new Host or Service Check. This is because on these screens you can’t see the complete context of keywords, and it’s far too easy to create useless duplication. You should instead associate keywords with hosts and services from the Configuration/Keywords screen.

Naming Convention

Okay, let’s go to that screen now, and talk about our naming convention. Yes, there needs to be one, so that you can look at a keyword in another part of Opsview and have a rough idea what it might be associated with. Here’s the template I use, and some examples:

<type>-[<owner>-]<thing>

device-ups
server-linux
service-smtpmsa
service-nss-ntpstratum3

Let’s say you have a Linux server running an SMTP message submission service and an NTP Stratum 3 service. I would create one keyword for the underlying operating system (CPU, memory, disk, etc), named “server-linux”. I’d create another for the SMTP service as “service-smtpmsa” and another for the NTP as “service-ntpstratum3”. If your Opsview is shared between a number of teams, it might also be useful to insert the managing team for that service in the name, as I’ve done with NSS, above. The type “device” tends to be reserved for appliances which fulfil one function, so you don’t need to separate out their server/service nature.

With this in place, if the UNIX Systems team manages the server and OS, and another team manages the applications stack on the box, we’ve got keywords for each, allowing easy and fine grained visibility controls. When creating the keywords, you should go into the Objects tab and associate it with the appropriate hosts and service checks. I find this much more straightforward than using the Keywords field on the actual host and service check configuration pages.

Viewport

Let’s look at each of the three cornerstone uses I mentioned above, in turn. First is the Viewport. Well, that’s easy enough to enable for a keyword by toggling the radio button and assigning a sensible description (such as “Email Message Submission Service” for “service-smtpmsa”). Which users can see which items in their own viewport is configured in the role (Advanced/Roles) associated to that user. I’d clone off one new role per user, and go to the Objects tab, remove all Host Groups or Service Groups and select only some Keywords. Job done – the user now sees those items in their viewport.

Actions

Next up is the ability for a user to acknowledge, or mark as down, an item. In fact it’s done in the same way as the viewport, that is, through a role. That’s because roles contain, on the Access tab, the VIEWPORTACCESS item for viewports and the ACTIONSOME/NOTIFYSOME items for actioning alerts. Because it’s currently only possible for a user to have one role, you cannot easily separate these rights for different keywords – a real pity. But I have no doubt multiple roles will come along, just like multiple notification profiles.

Notifications

Which brings us to the final item. Again I’d create a new notification profile for each user, so that it’s possible to opt them in or out of any service notifications. Using keywords makes things simple – are you just managing the underlying OS? Then you can have notifications about that, and not the application stack. It doesn’t stop you seeing the app stack status in your viewport, though. Because the notification profile is associated with a user, you’ll only be offered keywords that have been authorized in their role, which is a nice touch.

And finally…

In each of these steps the naming convention has really helped, because when looking at keywords the meaning “these hosts” or “this service” will (hopefully) jump out. If I were scaling this up, I’d have it all provisioned via the Opsview API from a configuration management or inventory database, and updated nightly. This is another way naming conventions help – they are friendly to automation.

Posted in devops, linux, monitoring | Comments Off

Cfengine3 on Debian Squeeze for local management

Dialling the nerd factor up to 11, I’ve decided to store configuration for my VPS server in git and manage it with Cfengine3. Joking aside, this is a sound decision: having the VCS repo makes backups simple and trustworthy, and configuration management motivates me to keep on using that repository.

On Debian Squeeze it’s a simple case of apt-get install cfengine3, with the caveat that, thanks to this packaging bug, I hacked /etc/cfengine3 to be symlinked from /var/lib/cfengine3/inputs.

[Edit: A colleague of mine, David, suggests that the package should link cfengine3's masterfiles to /etc, and I'm inclined to agree.]

Anyone familiar with Cfengine2 will have a good head start on the Cfengine3 configuration, however it’s still a bit of a learning curve (but we know complex problems rarely have simple solutions). The first file read is promises.cf which can include other files (“inputs”, in any order), and lists the promise bundles and their order of execution:

body common control {
    bundlesequence  => {
            "main"
    };

    inputs => {
        "site.cf",
        "library.cf"
    };
}
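With promises.cf and its inputs in place, a manual run looks something like this (assuming the Debian package’s input path mentioned above):

sudo cf-agent -KI -f /var/lib/cfengine3/inputs/promises.cf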

The library.cf file is simply a bunch of macros or templates. For example, the built-in copy_from command is augmented with some sane defaults and named local_copy:

body copy_from local_copy(from) {
    source  => "$(from)";
    compare => "digest";
    copy_backup => false;
}

This is then used in my site.cf file to install some cron jobs:

bundle agent main {
    vars:
        "repo" string => "/path/to/git/repo";

    files:
        "/etc/cron.d"
            handle => "cron_files",
            comment => "copy crontab files to /etc/cron.d",

            copy_from => local_copy("$(repo)/etc/cron.d"),
            depth_search => recurse("inf"),
            perms => p("root","444");
}

This is a trivial example, and could be made better. For instance, all files in the target directory have their permissions changed (via the “p” macro), whereas it would make more sense to set permissions only on the files we copy, not on anything already existing.
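For reference, the recurse and p bodies used above come from library.cf, which I haven’t shown in full; they’re only a few lines each, along these lines:

body depth_search recurse(d) {
    depth => "$(d)";
}

body perms p(owner,mode) {
    owners => { "$(owner)" };
    mode   => "$(mode)";
}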

Hopefully this post shows that Cfengine3 configuration isn’t that hairy, and once the principles are installed in your head it’s a case of browsing the reference manual and building up promise libraries.

Postscript

I’d like to note that the Cfengine3 configuration mini-language could be better designed. Some statements are terminated by semicolons (as in the body, above), others are separated by commas but still semicolon-terminated (as in the bundle), and braced sections are inconsistently semicolon-terminated. This leads to awkward syntax errors when designing new promises :-(

Furthermore, I feel the language would benefit from some noise keywords, for example:

body copy_from local_copy(from) {

versus

body copy_from as local_copy(from) {

The latter makes it slightly more clear which is the base primitive and which the new macro name. I’m a great fan of the use of syntactic sugar, in moderation, and intuitive configuration mini-languages.

Posted in devops, linux | 2 Comments

Backing up Time Machine Backups from a ReadyNAS Duo

I’ve had a ReadyNAS Duo probably for about a year, now, and quite honestly can’t fault the little black box. Managing computer systems during the day, I have little interest in doing the same at home. The Duo provides home media sharing (DLNA and iTunes), a shared printer, Time Machine backup service, network storage for MythTV, UPS integration, and pretty good configuration and automation.

However, as we know, RAID is not a backup solution, so I still have to get the data onto some other media, and preferably out of the house. For this we have a Western Digital Elements external USB hard drive, to which the Duo copies data when connected.

For Time Machine backups it’s not obvious how to get copies of the sparsebundles. I found that they’re stored in the Duo’s c: volume, in a folder called .timemachine. This can either be backed up along with the whole of c: in one job, or separately by referring to it directly.

Posted in OS X | Comments Off