Fixed-width text and WordPress on iPhone/iPad

A quick note to celebrate fixing an annoyance with my blog’s rendering on the iPhone and iPad.

Text in code blocks or preformatted sections (i.e. fixed-width text) came out huge on the iPhone and iPad, making things look ridiculous. It seems the WordPress Twenty Ten theme (or derivatives of it) explicitly super-sizes fixed-width text on these platforms.

The fix is to edit style.css in whatever theme you’re using and comment out the styling as below:

/* =Mobile Safari ( iPad, iPhone and iPod Touch )
-------------------------------------------------------------- */
/*
pre {
    -webkit-text-size-adjust: 140%;
}
code {
    -webkit-text-size-adjust: 160%;
}
*/

I suspect the Twenty Ten theme’s styling is well-intentioned, but as I also have the SyntaxHighlighter plugin installed, the two conflict and things go awry.

This solution was gratefully lifted from the WordPress Answers web site.

Posted in blogging | Comments Off

Virtual Machine on Mythbuntu

I have a Linux box running the excellent Mythbuntu (Ubuntu-based) distribution, headless (that is, without a monitor). Quite a lot of the time it’s sat around doing nothing (and even during recording or playback the CPU is idle).

For some side-projects I wanted a clean Linux installation to mess about with. It seemed a good idea to run virtual machines and make the most of existing hardware; what surprised me was just how easy this turned out to be :-)

The Ubuntu documentation for KVM is excellent, I must say, but I fancied distilling things further and blogging here, as I typically do to record most of my technical adventures. I’m not going to bother with any of the GUI VM builder tools or even the Q&A install script, but simply specify the VM config fully, up front.

Optionally, check whether your CPU has virtualization extensions – any fairly recent desktop chip should do. On Ubuntu there’s a command called kvm-ok, or you can poke /proc/cpuinfo:

# kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

# egrep -q '(vmx|svm)' /proc/cpuinfo && echo 'good to go!'
good to go!

First up install the KVM software:

# apt-get install qemu-kvm virtinst

This will pull in all the necessary packages. On other platforms it should be similar, but the virtinst package is often renamed (e.g. virt-install or vm-install).

Before getting stuck into KVM we need to reconfigure the system’s network adapter to be a bridge. I prefer to set a static IP for servers on my home LAN and use the /etc/network/interfaces file for configuration:

# cat > /etc/network/interfaces
auto lo eth0 br0
iface lo inet loopback
iface eth0 inet manual
iface br0 inet static
    address <IP-ADDRESS>
    network <NETWORK-ADDRESS>
    netmask <NETMASK>
    broadcast <BROADCAST>
    gateway <GATEWAY>
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
    post-up ip link set br0 address <MAC-ADDRESS>

(hit ctrl-D)

Obviously, fill in the blanks for your own system’s IP and MAC address details. Next we can blow away Ubuntu’s network mangler daemon and poke the KVM service into life:

# apt-get --purge remove network-manager
# /etc/init.d/networking restart
# service libvirt-bin start
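
To sanity-check the bridge after the restart, something like this should work (brctl is in the bridge-utils package, which the bridge stanza in /etc/network/interfaces relies on anyway):

# confirm eth0 is enslaved to br0, and that br0 carries the static address
brctl show br0
ip addr show br0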

Now find somewhere on your disk for the VMs and a little script to live, and create a directory. I named mine /opt/vm. In there, try starting with this little shell script:

#!/bin/bash
virt-install --name=sandbox --ram=512 --vcpus=2 --os-type=linux \
  --autostart --disk=path=/opt/vm/sandbox.img,size=50 \
  --graphics=vnc,listen=0.0.0.0,port=5900 --noautoconsole \
  --cdrom=/opt/vm/mythbuntu-11.10-desktop-i386.iso

Walking through the above, it should be clear we’re creating a new VM called sandbox (this is the name KVM knows it by, not a hostname), with 512MB RAM, two virtual CPUs, a Linux-friendly boot environment, and a 50GB (sparse) disk. The VM will be automatically booted by the KVM service when its host system boots. The last line specifies an installation CD image from which the new VM will boot.

For the graphics configuration I’ve asked for a headless system with the console offered up via a VNC port on the host server. Note that listen=0.0.0.0 is essential for connecting remotely to the console (e.g. over your home LAN); otherwise the VNC port is bound only to the loopback interface.

Running the above will bring the new VM into life:

# ./sandbox.sh

Starting install...
Creating storage file sandbox.img                      |  50 GB     00:00
Creating domain...                                     |    0 B     00:01
Domain installation still in progress. You can reconnect to
the console to complete the installation process.

What KVM means by “installation still in progress” is that it knows this system is installing from the boot CD, so you should go right ahead and fire up VNC and connect to the console (port 5900 on the host server) to complete the process.
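
For example, from another machine on the LAN (vncviewer stands in for whichever VNC client you use; substitute your own host name or IP):

# VNC display 0 corresponds to TCP port 5900
vncviewer myth-host:0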

You’ll find that KVM saved the sandbox VM configuration in XML format in the /etc/libvirt/qemu directory, so that’s where to go to tweak the settings. Good documentation is available at the KVM website.

Be aware, however, that because KVM assumed the attached CD ISO was only needed for initial install, it’s not featured in the saved config as a permanent connection. You can, of course, remedy this (check out the virt-install man page for starters).
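
Whichever change you’re making, virsh edit is the safest way to adjust the saved XML: it opens the domain definition in your $EDITOR and validates it when you save.

# edit the sandbox domain’s XML in-place, with validation
virsh edit sandbox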

To finish off, here’s how to manage the lifecycle (start, restart, blow away, etc.) of the VM. Use the virsh utility, which can be run either with a single command as an argument or with no arguments for an interactive CLI. Note that virsh’s destroy is a hard power-off of the running VM; it doesn’t delete the config or disk:

# virsh
Welcome to virsh, the virtualization interactive terminal.
virsh # list
 Id Name                 State
----------------------------------
 10 sandbox              running

virsh # destroy
error: command 'destroy' requires <domain> option
virsh # destroy sandbox
Domain sandbox destroyed

virsh # create sandbox
error: Failed to open file 'sandbox': No such file or directory

virsh # create sandbox.xml
Domain sandbox created from sandbox.xml

virsh # list
 Id Name                 State
----------------------------------
 11 sandbox              running

Try the help command, and note that the VM’s XML settings file may need updating if you change things (see dumpxml).
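
For instance, after changing a running domain you could refresh the saved config like this (a sketch, following the file layout described above):

# capture the domain’s current settings back into the saved XML
virsh dumpxml sandbox > /etc/libvirt/qemu/sandbox.xml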

I hope this is a useful and quick tutorial for KVM on Ubuntu… good luck!

Posted in devops, htpc, linux, networking, virtualization | Comments Off

Hacking # in OS X

To get a # sign on an Apple keyboard you use the Option (or Alt) key + 3. This seems terribly clunky to me, and # is of course used quite a bit in programming and sysadmin work.

This hack remaps another key on the keyboard to produce the # character. I chose the funny squiggle that’s to the left of the number 1 key (§). This is the Section sign, used in document formatting. Just create a file at ~/Library/KeyBindings/DefaultKeyBinding.dict which contains the following:

{
    /* this will make all § turn into # */
    "\UA7" = ("insertText:", "#");
}

Any app that uses Apple’s Cocoa interface widgets for text input will pick this up after being restarted. There are some that don’t (perhaps TextMate? I haven’t checked that one, so if you know, please comment).

A lot more information about this is available at this excellent page on the Cocoa Text System, including some other neat hacks. Enjoy!

Posted in linux, OS X, perl, productivity | Comments Off

A (very) short list of Dist::Zilla tips

For App::fooapp-type distributions, you might want the README etc. generated from a specific file. Add this to dist.ini:

main_module = bin/fooapp
; btw, semicolon leads a comment, in case you forgot how to do that

Any Module::Install converts reading this should note that the file lives in the bin directory, not script.

In bin/fooapp itself you also provide an additional metadata hint (next to ABSTRACT):

# PODNAME: fooapp
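
Next to the ABSTRACT hint, that ends up looking like this (the abstract text here is invented):

# PODNAME: fooapp
# ABSTRACT: command line tool for frobnicating foo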

Together these result in nifty MetaCPAN links for the app’s documentation.

Finally this hint to the POD munger will allow a section to be pinned in place above the SYNOPSIS:

=begin :prelude

# POD here...

=end :prelude

I hope these tips are helpful to some…

Posted in perl | Comments Off

MythTV Transcoding (5): HTML5 Playback in MythWeb

I’m doing a series of posts here on automated transcoding of recordings in MythTV. The idea is to explain the basics, then evolve the design to end up with recordings suitable for playback on the iPad, which is a little more advanced.

  • The complete series of articles is available at this page

Last time we wired the H.264 transcode into MythWeb as a user job, producing files suitable for playback on the iPad (or other Apple devices – iPod, iPhone, Mac OS X, etc).

Today we’ll complete the series by embedding an HTML5 Video player into MythWeb which will stream these files. Of course, this means playback is possible not only on the iPad but in any browser that supports HTML5 video. The original plan was simply to provide a file download link in MythWeb, but my friend Colin rightly suggested an embedded HTML5 Video player would be much more awesome.

Your MythWeb files are probably installed somewhere like /usr/share/mythtv/mythweb. Open up modules/tv/tmpl/default/detail.php and replace the default embedded player with a new HTML5 Video tag. You should find this around line 797. Replace the two lines comprising the Direct Download hyperlink with the following:

<video controls preload="none" width="360" height="153">
  <source
    src="/h264xcode/<?php echo $program->chanid ?>_<?php echo date( 'YmdHis', $program->recstartts ) ?>.m4v">
  Your browser does not support the video tag.
</video>
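
To invent some values: a recording on channel 1234 that began at 20:00:00 on 15 January 2012 would be requested as /h264xcode/1234_20120115200000.m4v, which should match the file naming used by the h264xcode script from earlier in the series.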

For this to work, the H.264 files must be available through your MythWeb web server. In my case, I simply created a symlink from the server’s DocumentRoot to where the H.264 files live (so, change this to reflect your own H.264 file location):

ln -s /mnt/mythtv-video/Videos/iPad /var/www/h264xcode
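# (if Apache serves MythWeb, the directory may also need Options FollowSymLinks)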

The result, once the transcoded file appears on disk, is an embedded player on the recording’s detail page in MythWeb.

And we’re done! Browsing to the MythWeb site on an iPad shows the embedded player which, when clicked, opens fullscreen for native playback. My deep thanks to all those working on the software used in this series of articles, and I also hope you found it a useful read.

Posted in htpc, mythtv, transcoding | 5 Comments

Painless MythTV Channel Configuration

MythTV – a brilliant homebrew digital video recorder system. Killer features include playing content over the LAN at home, scheduling recordings via the web, and generally poking it to integrate with all kinds of devices (e.g. see my previous posts on H.264 transcoding). Even better, Mythbuntu makes installation a doddle.

However, the most hated part for me is configuring TV sources and channels – digital terrestrial via an aerial, and digital via satellite. MythTV’s built-in scanner works at best intermittently (for me), and when it does work, it comes up with 1,000 shopping and adult channels which drown out the 20 or so I’m really interested in.

Then there’s TV listings. All credit to the folks working on XMLTV and the Radio Times listings grabber – that’s some impressive work. But stitching it into MythTV usually ends up with hand-editing the database to insert XMLTV IDs. User friendly? I think not.

Partly this is because these tools are used internationally and nothing is standardised between countries. Even in the UK there are three ways to get TV listings (EIT over the air, Bleb, and Radio Times).

Finally I snapped, and wrote a Perl program to do all this work. It feels so nice now to have a simple, lightweight, repeatable process to configure sources and channels. That’s what good automation is all about.

The code will only work in the UK, but might be a starting point for those elsewhere. It configures XMLTV IDs, but that doesn’t mean you have to use the Radio Times grabber. You still have to go through MythTV’s setup program to tell it about tuner cards (before running the import program) but that’s not hard work.

The code and instructions are hosted on GitHub. Let me know if you use it, and how you get on. Don’t forget to back up your database (using MythTV’s mythconverg_* scripts) before starting!
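
For example (mythconverg_backup.pl ships with MythTV, though the exact invocation may vary by version):

# dump the mythconverg database to a backup file before importing channels
mythconverg_backup.pl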

Posted in devops, htpc, mythtv, perl | Comments Off

Hosting the AutoCRUD Demo

In my previous entry here (syndicated from blogs.perl.org), I linked at the end to a demo Catalyst::Plugin::AutoCRUD application running on DotCloud. I’m much happier with this than running something on my own personal server, and here are the notes on its setup.

For those unfamiliar, DotCloud is a Platform-as-a-Service (PaaS) offering with a freemium model. I’m grateful to them for this, as the free account provides all I need for my demo.

First, I followed to the letter Phillip Smith’s comprehensive guide on deploying a Perl Catalyst application to the DotCloud service. Next I customised the basic application created in the guide to use AutoCRUD:

  • removed the Root controller
  • added two Models and their supporting DBIx::Class Result classes
  • set basepath in the configuration
  • installed an hourly cron job (sketched after this list) to:
    • restore the SQLite databases
    • restart the web service (supervisorctl restart uwsgi)
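
The cron entry looks something like this (the paths and the restore step are illustrative; the supervisorctl command is the one from the list above):

# m h dom mon dow  command
0 * * * *  cp ~/backups/*.db ~/current/db/ && supervisorctl restart uwsgi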

Next I wanted a tidier-looking domain for the demo, so I purchased autocrud.pl through NETIM. My plan is to have demo.autocrud.pl pointing to the DotCloud instance, and sometime in the future to have autocrud.pl used for a secret feature I’m still working on. Sadly NETIM only offers HTTP redirects from subdomains, so I delegated hosting of the DNS to ClouDNS.

ClouDNS is another freemium service, again where the free part provides just what I need. They offer not only a rather smarter interface than NETIM for DNS zone management, but also HTTP redirects from the zone apex.

I do of course know that nothing lasts forever, particularly with freemium services, and I’m grateful for what’s available because it works very well (I’ve added promotional icons for ClouDNS and DotCloud to the demo site).

The end result of this is that I now have the AutoCRUD demo safely hosted on DotCloud with a friendly URL to pass out in documentation or blog posts :-)

Posted in databases, devops, git, perl | Comments Off

AutoCRUD revamped

For a couple of years I’ve been planning to rip apart and put back together the guts of Catalyst::Plugin::AutoCRUD, to address limitations in the initial implementation. After changing jobs and moving house, I’m pleased this finally came to the top of my hacking stack.

Nothing was going to happen, however, before I could work out how to do one thing: achieve independence from DBIx::Class as a “storage engine”. I love DBIx::Class, but it would be much cooler to support any data storage system able to represent a table+column paradigm (even things like CSV, as a test case).

So SQL::Translator hit me like a thunderbolt. Of course, that’s exactly what it does: introspect some data storage and provide a neutral, class-based representation of the tables and columns (fields). It’s a little rough around the edges, but certainly good enough. The Translator provides a metadata structure which AutoCRUD’s web front-end can use, independent of any particular storage engine such as DBIx::Class. This also paves the way for development of display engines other than the bundled ExtJS and simple HTML offerings.

Right now there’s a developer release of AutoCRUD on CPAN, and I hope shortly to have a production release. Whilst the web side might not look much different, it can now support significant features such as tables with composite/compound primary keys (or no primary keys, for that matter), database views, relations to self, multiple relations to the same table, and so on.

Alongside that, I’ve taken the opportunity to fix a few quirks of the web interface, and chomp my way through the outstanding wishlist. The updated code is now running on a DotCloud instance, so please go and have a play!

p.s. a cron job will restore the demo’s databases at the top of every hour

Posted in perl | Comments Off

Releasing trial/dev/beta versions with Dist::Zilla

You might have stumbled across Dist::Zilla's --trial command line option in the past, and maybe even used it for a developer CPAN release. Its effect is (as I understand it) two-fold:

  • adds -TRIAL to the name of the distribution archive being produced
  • sets release: testing in the META.json file which is parsed by CPAN services

It came to my attention that using -TRIAL is actually pretty bad for you, your system, and other users, even though it's one of the two naming conventions CPAN services use to identify developer releases.

The problem is that the actual $VERSION of your code is unaffected. This means that once installed, you can't query the version of a distribution and tell from it whether it's a developer release or not. A secondary issue is that on sites such as metacpan.org there's nothing obvious in the list of available versions which highlights a release's "development" status.

An alternative way to signal to CPAN services that a dist is a trial release is to append an underscore and a secondary version number to $VERSION, like _001. This is still a bit crappy, but at least humans can easily see what's going on.

Back to Dist::Zilla. If you use the AutoVersion plugin, a better alternative to --trial is to set the DEV environment variable when you build or release the distribution (example after this list). This has the effect of:

  • adding (sprintf '_%03u', $ENV{DEV}) to the end of $VERSION
  • setting release: testing in the META.json file which is parsed by CPAN services
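
So a trial release is as simple as this (the resulting version string is illustrative, based on AutoVersion's default format):

# produces e.g. 1.120150_001 and marks the release as a trial
DEV=1 dzil release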

Otherwise, the best thing to do right now is to set the version manually for developer releases. I hear from chatter on IRC that there are plans to change Dist::Zilla's --trial feature to alter $VERSION if necessary (that is, if no underscore exists), which seems a good compromise to me.

Posted in perl | Comments Off

MythTV Transcoding (4): Add an H.264 Transcode Job

I’m doing a series of posts here on automated transcoding of recordings in MythTV. The idea is to explain the basics, then evolve the design to end up with recordings suitable for playback on the iPad, which is a little more advanced.

  • The complete series of articles is available at this page

Last time we went through the steps to transcode a recording into the H.264 format, required for playback on iPad (or other Apple devices – iPod, iPhone, Mac OS X, etc).

Today we’ll continue the series by integrating this with MythWeb so that a new button appears which sets off the transcode job, and then go on to make the output available for streaming over the web. Remember that there are other ways to watch recordings on the iPad, as listed in the previous article in this series.

First we need to let MythTV know about the new user job command (/usr/local/bin/h264xcode). You can do this either in the mythtv-setup program, or via the MythWeb interface. Personally I use the latter, as stopping the MythTV Backend and running the setup program over an SSH tunnel is fiddly, but I’ll cover both here for completeness.

In mythtv-setup you want to select General and then hit Next until you reach the Job Queue (Job Commands) page. There, fill in the description and command for User Job #1. The command uses parameter variables which are filled in by MythTV (a list of the variables is on the User Jobs MythTV wiki page). Here’s the command in full:

/usr/local/bin/h264xcode %CHANID% %STARTTIME%

Alternatively, configure these settings in the MythWeb interface. Click on the big spanner (for Settings) and choose MythTV from the list of buttons on the left. Scroll down and you’ll find a field called UserJob1 and, a little below it, another called UserJobDesc1. These take the same values as in the mythtv-setup interface.

Don’t worry about the dummy values in other User Job description fields – unless the Job has a command set, it’s ignored. If you did use MythWeb, you must now restart your MythTV Backend process to get these updated settings.
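
On Mythbuntu that restart should be as simple as this (service name as per the standard Ubuntu packaging):

sudo service mythtv-backend restart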

Hop over to MythWeb, and you should now see a new “H.264 Encode” button for each recording, which fires off a transcode Job that’s also visible in the MythWeb Job Queue.

If you have the disk space, and want this to happen for all recordings automatically, check out the instructions from part 2 of this series. You should tick the Run user job box in MythFrontend settings, and also note that this only takes effect for new Recording Schedules, although I explain a fix for existing schedules at the end of that article. Don’t forget to clean up old transcode files after a while!

The final step on my road to iPad streaming is to embed this video into MythWeb, which I’ll cover in the next article.

Posted in mythtv, transcoding | Comments Off