pkgdepend improvements

I should have written about this a few days ago, but better late than never.

With the putback of:

changeset:   2236:7b074b5316ec
user:        Tim Foster 
date:        Tue Feb 22 10:00:49 2011 +1300
        16015 pkgdepend needs python runpath hints
        16020 pkgdepend doesn't find native modules
        17477 typo in pkgdepend man page
        17596 python search path not generated correctly for xml.dom.minidom
        17615 pkgdepend generate needs an exclusion mechanism
        17619 pkgdepend generate is broken for 64-bit binaries when passing relative run paths

pkgdepend(1) has become better at determining dependencies. I’d done some work on pkgdepend before, and it was nice to visit the code again.

To those unfamiliar with the tool, I thought I’d write an introduction to it (which I should have written last time).

pkgdepend in a nutshell

pkgdepend is used before publishing an IPS package to discover what other packages are needed in order for the contents of that package to function properly. The packaging system then uses those dependencies whenever a package is installed to automatically install those dependencies for you.

During the creation of a package, the process of running pkgdepend on your manifests is broken into two phases, each with its own subcommand.

pkgdepend generate

The first is called ‘generate’. This is where the code examines each of the files you’re intending to publish in your package. Depending on the type of file it is, we look for clues in that file to see what other files it may depend on.

Those clues could be as simple as the path that comes after the ‘#!’ in UNIX scripts (so for a Perl script with ‘#!/usr/bin/perl’ at the top of it, obviously you need to have Perl installed in order to run the script) or could be complex, such as digging around in the ELF headers in an ELF binary to find the “NEEDED” libraries, determining Python module imports in a Python script, or looking at ‘require_all’ SMF services in an SMF manifest.
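As a toy sketch of the simplest clue mentioned above (this is not pkgdepend’s actual implementation, just an illustration of the idea), extracting the interpreter path from a script’s ‘#!’ line looks something like:

```python
def shebang_dependency(first_line):
    """Return the interpreter path named on a '#!' line, or None."""
    if not first_line.startswith("#!"):
        return None
    tokens = first_line[2:].split()
    # The first token is the interpreter path; anything after it
    # (e.g. '-w' for Perl) is an argument, not a file dependency.
    return tokens[0] if tokens else None

print(shebang_dependency("#!/usr/bin/perl -w"))  # /usr/bin/perl
```

The real tool does considerably more (ELF NEEDED entries, Python imports, SMF manifests), but each analysis boils down to the same pattern: inspect the file, emit the paths it appears to need.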

There’s a list of all the things used so far to determine dependencies in the pkgdepend(1) man page.

Once pkgdepend has gathered the set of files it thinks should be dependencies for the files you’re delivering, it outputs another copy of your manifest, this time with partially complete ‘depend’ actions.

I say partially complete because all we know at this stage is that your package will need a bunch of files in order to function properly: we don’t yet know which packages deliver those files. That’s where the second phase of pkgdepend comes in: dependency resolution.
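To make “partially complete” concrete, here’s the shape of the action ‘pkgdepend generate‘ emits for an ELF binary that needs libc.so.1 (the binary’s path, usr/bin/foo, is invented for this example; the fmri is left as a placeholder until resolution):

```
depend fmri=__TBD pkg.debug.depend.file=libc.so.1 \
    pkg.debug.depend.path=lib \
    pkg.debug.depend.path=usr/lib \
    pkg.debug.depend.reason=usr/bin/foo \
    pkg.debug.depend.type=elf type=require
```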

pkgdepend resolve

During dependency resolution, via the ‘pkgdepend resolve‘ subcommand, we take that partially complete list of depend actions, and try to determine which package delivers each file the package depends on.

In order to do this, pkgdepend needs to be pointed at an image populated with all the packages that your package could depend on. In most cases, that image is simply the machine you’re building the packages on (remember, in IPS terms, every package is installed to an “image”: your running copy of Solaris is itself an image), though you could point ‘pkgdepend resolve‘ at an alternate boot environment containing a different image.

Assuming we’re successful, you are then presented with a version of your package in which each dependency has been converted from the filename needed to satisfy it into the actual package IPS will install for you so that your package can function.
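To illustrate, a resolved dependency ends up as an ordinary depend action naming a package rather than a file; the package FMRI below is purely illustrative:

```
depend fmri=pkg:/system/library@0.5.11-0.151 type=require
```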

Things that can go wrong

I say “assuming we’re successful” because, unfortunately, sometimes we’re not.

There are several things that can go wrong:

  • an ELF header entry could be incorrectly specified at build time, or could contain $VARIABLES that pkgdepend doesn’t know how to expand
  • a file might be delivered by multiple packages on your system, in multiple places
  • a python script might modify sys.path, a shell script might modify $LD_LIBRARY_PATH, etc.
  • we could deliver scripts only meant to be read, not run (demo scripts, for example) which could cause either fake dependencies, or dependencies which could never resolve

All of the things above can result in error messages from pkgdepend, where it’s unable to determine exactly what we should be depending on – this is the part of pkgdepend I was trying to fix in my putback.

It fixes a few bugs in pkgdepend when dealing with Python modules and kernel modules, and it introduces two new IPS attributes:

  • pkg.depend.bypass-generate
  • pkg.depend.runpath

The first, pkg.depend.bypass-generate, is used to specify regular expressions matching files on which we should never generate dependencies. This gets us around the cases where multiple packages deliver files in several places, or where $VARIABLES aren’t being expanded. Bypassing dependencies this way is useful, though you do need to be careful where and how you apply it – if you bypass a legitimate dependency, then there’s a good chance your package won’t function properly if the packages it depends on aren’t installed.

The second, pkg.depend.runpath, is used to change the standard set of directories that pkgdepend searches, per file type, when looking for file dependencies. This gets us around the case where programs are installed in non-standard locations.
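Both attributes are set on individual file actions in the manifest. As a rough sketch (the paths and patterns here are invented; see pkgdepend(1) for the authoritative syntax):

```
# Never generate dependencies for this demo script:
file path=usr/share/examples/demo.pl pkg.depend.bypass-generate=.*

# Search an extra, non-standard directory as well as the defaults
# when analysing this file's dependencies:
file path=opt/myapp/bin/tool.py \
    pkg.depend.runpath=$PKGDEPEND_RUNPATH:/opt/myapp/lib
```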

What’s next?

Alongside this work, I’ve been doing work on the ON package manifests to greatly reduce the number of pkgdepend errors reported during the ON build. (Sadly, I can’t share the work on the ON manifests, but it will go back once snv_160 is available internally. If you’re an ON engineer, there’ll be a Flag Day attached to this, making snv_160 the minimum build on which you can build the gate.) Quite soon after that, we’ll be able to enable error reporting from the pkgdepend phase of the build, and that will be fabulous.

I’d strongly encourage those working on Illumos and other derivatives of the OpenSolaris codebase to investigate the new pkgdepend functionality, and put in the time to get their gate pkgdepend-clean too.

Why? Well, in my view, one of the problems with SVR4 packaging was that it lacked any sort of automatic dependency analysis. This meant that packages declared manual, often-bogus dependencies on other packages – and dependencies that aren’t correct make minimisation of systems very difficult.

When we determine dependencies automatically, minimisation becomes a lot easier.

Crucially, so does package refactoring: if we split or merge packages, so long as those new packages are installed on the image being used to resolve dependencies, the packages that have dependencies on those split/merged packages automatically pick up the new package names the next time they’re published.

However, without actually checking the exit status from the pkgdepend phase of the build, you’re having to insert more manual dependency actions than should be strictly necessary, and that’s a bad thing.

Of course, sometimes we can’t avoid inserting manual dependencies – pkgdepend isn’t finished yet, and there’s more we could be doing to determine dependencies at package publication time, however the tool does make life a lot easier. So, if you’re ever tempted to insert a manual dependency into your package, please do think carefully about it, and please add a comment to the manifest explaining in detail why that manual dependency is really required.

Looking back on 2010: what I did

2010 was a busy year for myself and the family. Here are some of the things I got up to:

  • Moved job

    I joined the IPS team in January

  • Moved company

    Sun Ireland disappeared in April, and I became an Oracle employee. Personally, that’s working out very well, and I’m not noticing much difference in terms of my day-to-day work. Yes, the company has a different culture – and if there were a “Top Gear Cool Wall” for IT companies, in my opinion, Oracle would likely be “Seriously uncool“, with Sun as “sub-zero“ – but I think the Solaris engineering culture has survived the transition so far, at least at my level. We’ll see what the New Year brings.

  • Moved country

    I moved from Ireland to NZ in May, having gone through the long process of applying for residency here. That this coincided with the economic conditions in Ireland was purely chance. Our decision to move was not related to work or the economy, it was purely based on a willingness to stretch our horizons and see some more of the world. So far, NZ is treating us well – ups and downs.

  • Ran another marathon


  • Was mentioned in a piece by the Irish Times

    (albeit with a couple of errors: they used my wife’s maiden name, and mis-credited my photo to Alan Betson – though I’m happy to have my photographs confused with his, I’m really not in the same league :-) )

  • Did a total of 24 putbacks to the pkg-gate.

    This covered 66 bugs or RFEs – some were minor and didn’t take a lot of work to get right, like 12723, some were major like 13536 and were projects in themselves. All were a lot of fun to work on though, and I feel privileged to continue to be employed on something I enjoy.

Looking forward to 2011, I’m hoping to explore more of NZ. If you follow me on Twitter, subscribe to my Flickr feed or look at some of the stuff I’ve put up on Smugmug, you’ll see some of the photos I’ve taken this year: this country really is incredibly beautiful – indeed, as is Ireland. You should visit both places, then pick which one you want to live in.

There’s lots to do next year – we’re moving house in a few weeks’ time, and I get to set up a new home office, in a separate building from the house – so I’m hoping my struggle for a healthier work/life balance will start tipping in the right direction.

I’ll also try to keep up the running – possibly entering the Wellington marathon again to see if I can better this year’s time, though, being honest, I’m not sure I’ve got a faster pace in me. I’ll aim to improve regardless. Having recovered from the marathon, I’m doing a lot less road-running and a lot more mountain/off-road running now, so perhaps it’s time to look for a different sort of event instead?

Roll on 2011 :-)

Scripting languages as OS core components

"A Shell Script wants your job", Original image by jm3 / John Manoogian III

There’s been some crowing recently about how wonderful it is that a scripting language is no longer a dependency for an OS build.

My opinion is that this is a shame on many levels: it’s a shame because the time spent rewriting all of this code could have been better spent elsewhere, it’s a shame because the new code has presumably introduced a heap of new bugs, and it’s a shame because the bar has been raised for potential contributors to their codebase: if you don’t know C, you can’t write code for them.

Most importantly though, I believe that scripting languages have a better place in /usr/bin and as helper components for core OS functionality than some folks seem to believe.

I’ve been writing in Python for a few years now, first as part of our Xen port, now on IPS, and I think that the sorts of things most OS commands do are far easier to express, code and debug in Python than in C.

Perhaps it’s just me, but I’m much more comfortable firing up an editor and debugging a script I can see than I would be having to download and set up a complex build environment, compile sources (if that source is even available :-/ ) and drop binaries in place.

Excising Python from an operating system is like chopping off your arm. Sure, you’ve fewer dependencies now (the original reason cited for removing the code in question) – no need for wrist watches, and you won’t get worn patches on your jumpers at the elbows – but I’m not convinced it was the right move.

Still, each to their own – this is merely my opinion.

pkg history enhancements

Original image by Kristian Bjornard (bjornmeansbear)
With the putback of:

changeset:   2135:6eeb55920e13
tag:         tip
user:        Tim Foster
date:        Wed Nov 10 11:32:32 2010 +1300
	3419 history command -- ability to specify dates or date ranges desired
	17012 pkg history should include boot environment or snapshot information
	17013 pkg history subcommand should display boot environment and snapshot information
	17222 pkg history could use a -o option

the pkg ‘history’ subcommand now has some new features.

We now record the name of the boot environment the operation was applied to, any ZFS clones created, and any ZFS snapshots taken during the course of the operation.

In addition, pkg history can now accept a comma-separated list of column names, via a new -o option, to print different output. The known column names at the moment are:

be          The name of the boot environment this operation was started on
client      The name of the client
client_ver  The version of the client
command     The command line used for this operation
finish      The time that this operation finished
id          The user id that started this operation
new_be      The new boot environment created by this operation
operation   The name of the operation
result      A summary of the outcome of this operation
reason      Additional information on the outcome of this operation
snapshot    The snapshot taken during this operation. This is only recorded if the snapshot was not automatically removed after successful operation completion
start       The time that this operation started
time        The total time taken to perform this operation (for operations that take less than a second, “0:00:00” will be printed)
user        The username that started this operation
The old “result” column has been split into “result” and “reason” to preserve field formatting, and the old “time” column has been renamed to “start”. The “time” column now contains the total operation time (“finish” minus “start”) – I figured that calculating the total operation time might be useful, rather than expecting users to do it manually.

Finally, pkg history gets a ‘-t’ flag, allowing users to specify a comma-separated list of dates, or ranges of dates, they’re interested in. Previously, users could only choose to see all events, or the last ‘n’ events with the -n flag.
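A sketch of the new options in use (the dates are invented, and the exact range syntax is described in the pkg(1) man page):

```
$ pkg history -t 2010-11-01-2010-11-10 -o start,time,operation,result
```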

I really like the history subcommand – I’ve found that being able to see over time which packages have been installed and removed from the system, and which operations have failed or succeeded is extremely useful. Being able to find detailed information about how packages have been managed over time gives quite an insight into how people use software. It’d be interesting to use this as input on deciding how to craft custom distributions of Solaris that contain the software that people use in the real world.

We didn’t have a history function in SVR4 that I’m aware of – another point in favour of IPS. History Lives on in Historic Historyville!

Fireworks, 2010

For the past few years, every Halloween, we’ve had fireworks in Dublin. I go out, take photos, post them on my blog along with a bit of a commentary on what I got up to during the day. I generally also give a nod to Z-Day along the way (back in 2005, I was working on the ZFS test team when we integrated into Solaris, something I still feel proud to have been a part of).

This was the first year in a long time that I’ve not done that, and while my birthday was great in every other way, I really missed the fireworks. Halloween just isn’t a big thing here in NZ, and I think that’s a shame. On or around my birthday, since the day I was born, there’s always been something extra special in Ireland – fireworks, a spooky atmosphere, and some pretty wonderful parties (I like to think Tim’s Halloween Fancy Dress parties had a bit of a reputation, back in the day).

It turns out that NZ, with its colonial background, has more celebrations for November 5th than it does for Halloween, and Wellington seems to go all-out on the fireworks front. So, armed with my 70-200 f4 (no tripod – I enjoy the swirling pattern that you get from handheld fireworks photos), I took off up to the hills of Khandallah this evening, ditched the car wherever I could find some space, and rushed to find a vantage point. I didn’t get the best spot, but I was reasonably happy with the photos I got nonetheless.

The SmugMug album is here, and I’ve thrown in a few thumbnails below (if you want them bigger, go for the SmugMug link instead). I’m looking forward to next year’s fireworks already, wherever I’ll be at that stage.

Talking about IPS in Wellington

I gave a talk about IPS yesterday to a group of sysadmins from Oracle customers here in Wellington, New Zealand.

The talk itself was limited to 30 minutes, and was based on a much longer internal presentation my colleagues Danek & Bart gave earlier this year to the kernel engineering group at Oracle.

The slides for the talk are here.

I think I tried to squeeze a bit too much content in, barely having time to breathe during the talk. I covered the main reasons why sticking with SVR4 packaging & patches is a really bad idea (with this audience, it felt like preaching to the choir).

I covered the basic design assertions behind IPS, then went on to talk about actions, dependencies, variants and facets.

If given the chance to do such a talk again, with similar time constraints, I think I’d simply sit down at a command line, and walk through the various pkg(5) command line tools, talking about the details of the packaging system along the way.

As ever though, the best conversations came after the talk was finished – I talked to several customers who’ve been using Solaris 10 and Zones, have experienced patching them, and were keenly interested in getting something better. They seemed to be happy with the direction we’re going in.

I pointed them to the documentation we’ve got up on the project web page, and encouraged them to have a look at older OpenSolaris builds if they wanted to get a preview of IPS as it will appear in Solaris 11.

They asked how complex it would be to convert between SVR4 packages and IPS packages – ‘pkgsend generate‘ is a good start here. That led to a good discussion of how we’ve been converting post-install scripting in Solaris itself over to SMF services that run once on boot, allowing you to be confident that your scripts will run in the environment they were intended to run in.
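For example, ‘pkgsend generate‘ walks a directory tree (the path below is illustrative) and emits package actions for its contents, which you can then edit into a proper manifest:

```
$ pkgsend generate /path/to/installed/tree > mypkg.p5m.generated
```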

All in all, I felt good about the talk, but would definitely have liked more time, if only so that I didn’t have to gloss over as much potentially interesting detail as I did.

For my troubles, I was given a rather nice Oracle Solaris coffee cup – thanks Rob!


Some pocket lint, found in my jeans when I did this putback.

With the recent putback to the IPS gate:

changeset:   2046:2522cde7adc2
tag:         tip
user:        Tim Foster 
date:        Thu Aug 26 13:11:20 2010 +1200
	13536 We need a way to audit one or more packages
	15860 publication api needs auditing phase
	15862 pkglint tool needed aid in package creation and auditing
	16828 ProgressTracker should make it easier for others to interleave output
	16875 we should be able to execute tests directly from the source
	16800 pkglint should allow signature actions in obsolete and renamed manifests

we now have pkglint(1), a tool that can check package metadata for common errors before publishing. We never really had an equivalent for SVR4 packages, although many have written scripts to do so. The pkglint man page documents how the tool works.

Out of the box, the checks below are performed on manifests, either retrieved from a repository or passed as local files on the command line. It’s also pretty easy to extend pkglint(1) with your own checks (details in the man page). If you think something might be missing from this default list, do please let me know.

timf@linn[2808] pkglint -L
pkglint.action.005           pkg.lint.pkglint_action.PkgActionChecker.dep_obsolete
pkglint.action.003           pkg.lint.pkglint_action.PkgActionChecker.legacy
pkglint.action.005           pkg.lint.pkglint_action.PkgActionChecker.license
pkglint.action.001           pkg.lint.pkglint_action.PkgActionChecker.underscores
pkglint.action.004           pkg.lint.pkglint_action.PkgActionChecker.unknown
pkglint.action.002           pkg.lint.pkglint_action.PkgActionChecker.unusual_perms
pkglint.action.006           pkg.lint.pkglint_action.PkgActionChecker.valid_fmri
pkglint.dupaction.002        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_drivers
pkglint.dupaction.006        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_gids
pkglint.dupaction.005        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_groupnames
pkglint.dupaction.008        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_path_types
pkglint.dupaction.001        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_paths
pkglint.dupaction.007        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_refcount_path_attrs
pkglint.dupaction.004        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_uids
pkglint.dupaction.003        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_usernames
pkglint.manifest.005         pkg.lint.pkglint_manifest.PkgManifestChecker.duplicate_deps
pkglint.manifest.006         pkg.lint.pkglint_manifest.PkgManifestChecker.duplicate_sets
pkglint.manifest.004         pkg.lint.pkglint_manifest.PkgManifestChecker.naming
pkglint.manifest.001         pkg.lint.pkglint_manifest.PkgManifestChecker.obsoletion
pkglint.manifest.002         pkg.lint.pkglint_manifest.PkgManifestChecker.renames
pkglint.manifest.003         pkg.lint.pkglint_manifest.PkgManifestChecker.variants
opensolaris.action.001       pkg.lint.opensolaris.OpenSolarisActionChecker.username_format
opensolaris.manifest.001     pkg.lint.opensolaris.OpenSolarisManifestChecker.missing_attrs

Excluded checks:
opensolaris.manifest.002     pkg.lint.opensolaris.OpenSolarisManifestChecker.print_fmri

Over the coming weeks, I’ll be addressing some additional bugs and RFEs for pkglint. Once we’re sure it’s stable, I hope to start working with the right folks to see if we can get pkglint(1) runs performed on their gates during their builds.

Many thanks to everyone who helped code review and provide feedback – it was very much appreciated!

Setting up my IPS development environment

There’s probably only a small number of people who’ll find this useful, but this is one of those scripts that I’ve had kicking around for ages and use daily, so I thought it was worth a mention here.

This sets up a developer environment for IPS, pointing $PATH and $PYTHONPATH to the right place. Just run it from anywhere beneath any of your development workspaces, and you’ll be set to run pkg from the proto area of that workspace.

timf@linn[990] echo $PATH
timf@linn[991] $(ips-env)
timf@linn[992] echo $PATH
timf@linn[993] echo $PYTHONPATH
timf@linn[994] cat ~/bin/ips-env

function find_proto {
	while [ -z "$PROTO" ] && [ "$PWD" != "/" ] ; do
		if [ ! -d "$PWD/proto/root_$(uname -p)/usr/lib/python2.6/vendor-packages/pkg" ] ; then
			cd ..
		else
			PROTO="$PWD/proto"
		fi
	done
	if [ -d "$PROTO" ] ; then
		echo "$PROTO"
	fi
}

PROTO=$(find_proto)

if [ -z "$PROTO" ] ; then
	echo "No proto found at or above current directory" >&2
	exit 2
fi

echo export PYTHONPATH=$PROTO/root_$(uname -p)/usr/lib/python2.6/vendor-packages
echo export PATH=$PROTO/root_$(uname -p)/usr/bin:${PATH}
echo export SRC=$PROTO/../src

2010 Harbour Capital Marathon

I ran the Wellington Harbour Coast Marathon yesterday, finishing 32nd overall, 24th in my category with a time of 3:12:39 – results here. I’m thrilled with the result, and thought a race write-up was in order.

Some beaten up runners, my medal and a source of encouragement

It was a very early start – with the gun going off at 07:30, I was up at about 05:40 in order to get some breakfast in and give it a chance to be digested. I was in bed early the night before, and the night before that, but didn’t sleep much on Saturday. Still, I’d watched the best pre-marathon advice video I’ve seen, and was as prepared as I could be. The weather for the day was a 10°C high, 7°C low, a 37kph southerly and 87% humidity, according to the race website, and it felt pretty bleak as the missus gave me a lift to the starting area.

We made it to the Westpac Stadium for about 06:40, checked in my bag of clothes for after the race, donned my bin liner and started some gentle stretches & warm-ups.

After the pre-race briefing, we headed out into the pouring rain to the stadium concourse, and lined up at the start. It was a much smaller field than Dublin last year, with 387 people taking part. The half marathon and 10k races, held on the same course that day, had 1,600 and 1,162 finishers respectively and the walks added a few more numbers to those taking part: about 4,000 people in total.

The course itself was an out-and-back route, around the Wellington harbour and bays as far as Breaker Bay and back again. It was cold, dark and windy to begin with, and while the rain eased off a little after a few kilometers, the wind was pretty constant. Thankfully, the wind was a blessing as well as a curse: there was a good chance the wind you fought in one direction would give you a gentle push on the return journey. I drafted as much as I could, and was happy to reciprocate.

My aim for the run was to beat my previous marathon time of 3:30, with a stretch goal of running a 3:15 – during training, I was racking up long runs at 4m 30s/km so felt I might be able to pull it off.

I ran the first 10k in 47:12, a little slower than my ideal pace, but reached the halfway point at 1h37m – bang on target. Then I picked up the pace a little, with a bit of a mental boost coming from running back past the rest of the field, who were still on the first half.

At the end of Shelley Bay Road, we hit the half-marathon turn-around point (the half-marathon run was also an out-and-back route), and a lot of human traffic as a result – the road had been ours so far, and we now had to share it with a much bigger field of runners. Going into the race, I’d thought that the half marathon entrants would be running at a much faster pace (and indeed, the winner of that race did it in 1:08:02 so some of them definitely were!) but further back in the field, they were running slower than I was and I overtook a lot of runners with green race numbers (marathon runners had black race numbers). Again I think that might have given me another subconscious boost, but the best boost of all was seeing the missus and the kids at that point, cheering me on!

During the course of the run, I had exchanged a few words with a fellow runner who, it turned out, was running his 13th marathon, at more or less my pace. So, at this point I decided I’d keep him in view for the remainder of the race, and we overtook each other at a few points on the run back, with general words of encouragement, egging each other on.

At about 38k I was getting pretty tired, but hung on for another few kilometers and then gave it everything I had over the final 2k, really picking up the pace and finally sprinting for the line over the last few hundred meters. I was elated to see a sub-3:15 on the clock as I ran over the line, and have a feeling that most of Wellington heard me whooping with delight! The guy I was chasing probably had a good 30 seconds on me at that point, but he was nice enough to wait around for me at the finish line and exchange congratulations.

As for my splits over the course of the race, I’m not entirely sure: I had been trying to record these every 2 kilometers, but with the drinks station positioning and my missing the occasional marker, what I ended up with is below. They seem a bit suspicious to me in places, but they’re reasonably consistent I think.

time kilometer
9:25 1, 2
8:52 3, 4
8:49 5, 6
21:06 7, 8, 9, 10
5:48 11 (this seems too slow)
8:40 12, 13
17:52 14, 15, 16, 17
16:18 18, 19, 20 (also seems too slow)
10:14 21, 22
8:53 23, 24
8:45 25, 26
9:00 27, 28
9:17 29, 30
9:27 31, 32
17:56 33, 34, 35, 36
9:13 37, 38
8:36 39, 40
12:25 41, 42 (I stopped my watch as I crossed the line, but found
while stretching afterwards that I had started the stopwatch again,
so this last split is definitely wrong)

This was my 2nd marathon, but it went a lot better than the last time. I was following exactly the same training plan from Hal Higdon’s website, with the only major gaps being the two weeks at the beginning when I had a cold, and the week or so in the middle when we were moving the family down to New Zealand.

I kept a physical training log this time around, pinned to the fridge so I’d always see it, as well as a version on Google Wave for a bit of self-induced peer-pressure. The yellow entries were runs I completed, the orange ones runs I missed, and my aim was to make the page turn from white to yellow with as few orange bits as possible (inspired by unit-testing – I think I should have picked green & red highlighters :-). Towards the end of the program, I wasn’t too worried about missing runs during the taper.

Tim's Training Log

I did a few things differently this time around, and I think they helped me a great deal during the race:

  • The Ngaio Gorge and its hill! When we moved to Wellington, my running routes changed to include a lot of hills (Wellington isn’t a flat city) and despite my complaining about them and the really crappy weather we’ve been having here, in retrospect, I think the hill running during the 2nd half of my training was the key thing that made me faster and tougher over the distance.
  • I tried as much as possible to stick to a constant pace – not to panic when things were a little slow at the start of the race, but to just keep an eye on my time credit/debit at the kilometer markers and make gentle adjustments to my pace.
  • I ate jelly babies fairly constantly during the race, as opposed to just when I felt I needed them (or deserved them!). Last time around I alternated between fig rolls and jelly babies, but I found the jelly babies on their own much easier to eat on the run.
  • I only drank water from the drinks stations during the race, and avoided the Powerade that was supplied, with the only exception being the 250ml of fresh orange juice (with a fair bit of salt added) that I was carrying, which I drank at about 37k. If I do another marathon, I might try to find out what energy drink the race provides, and add that to my training to see if it makes a difference.
  • I ran farther during training: I know this is up for debate, but felt that during the last marathon, the 20 mile (32k) long runs I was doing didn’t prepare me for those final 10k. This time, my longest training run was 37k, with an additional 4.1k of a gentle jog/walk back up the Ngaio Gorge – practically a marathon during training, and while I only did that once, I do think it helped.

Today, I feel a lot better than I remember the day after my first marathon – stairs today aren’t as scary as they were last October, and my John Wayne impression isn’t nearly as good as it was last year. That said, there’s an element of this in the way I’m moving about today (except the bit with the nipples, elastoplast FTW :-).

Will I do another marathon? Yes, probably – though I somehow doubt I’ll be able to make as much of an improvement in my PB next time around, we’ll see how things go. For now, recovery beckons.

pkg(5) SMF manifest dependency support

I’ve finally pushed some code that I’ve ended up working on in two different hemispheres!

With the putback of:

changeset:   1933:5193ac03ad9f
tag:         tip
user:        Tim Foster 
date:        Tue Jun 08 12:17:06 2010 +1200
	15305 need to generate dependency information from SMF manifests
	15306 need an attribute to declare the SMF services a package delivers
	15722 pkgdepend doesn't always remove internal deps when variants taken into account

pkgdepend(1) will now use SMF manifests as a source for dependency information.

This means that if you, as a package author, are including SMF manifests in your package, the package publishing system will look in those manifests for any service descriptions that declare other FMRIs as “require_all” dependencies of your service, and will then ensure that the packages delivering those services are marked as dependencies of the package you’re publishing.

Obviously, to do this work, pkgdepend needs to be run on a build machine that includes all the SMF FMRIs your package needs to function. That said, having your publishing system figure out dependencies for you means there’s one less thing for you as a publisher to worry about, and it will certainly make life easier for your users.
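For anyone who hasn’t used the tool, the two pkgdepend phases look something like this in practice – a sketch only, with made-up manifest and proto-area names:

```
# mypkg.p5m and proto_install are illustrative names, not from this post
$ pkgdepend generate -d proto_install mypkg.p5m > mypkg.p5m.dep
$ pkgdepend resolve mypkg.p5m.dep
```

The generate step finds file-level dependencies (including those pulled from SMF manifests), and resolve turns them into package-level depend actions, written alongside the input file.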

An interesting side-effect of this work is that you’ll now be able to search the package database for SMF manifest FMRIs. For example, if you’re wondering which package delivers svc:/network/iptun:default, you can do a simple search for it –

$ svcs iptun
STATE          STIME    FMRI                                
online         May_27   svc:/network/iptun:default
$ pkg search -l ':::svc\:/network/iptun\:default'
INDEX                ACTION VALUE                      PACKAGE
opensolaris.smf.fmri set    svc:/network/iptun:default pkg:/SUNWcs@0.5.11-0.142

(though do remember to escape the colons in the SMF FMRI field. Wildcards also work, so for kicks, you might want to try pkg search -l ':::svc\:*' :-)


all change

Sunset, New Zealand

We’ve just gone through LEC (legal entity combination) and Sun Ireland is no more.

I’m not sure how I feel about Oracle yet, but I’m going to stay positive and give them a chance to show me that it’s a great place to work. Certainly, getting paid to work on Solaris continues to be wonderful, but I always felt there was more to working at Sun than just the code. I think we’ve lost something special with the demise of Sun, but I’m looking forward to the next few months to see how things turn out.

Now Oracle, it’s your turn.

My home mini-NAS


I’ve been getting increasingly edgy about the backup strategy we use at home.

My work backups are a lot more comprehensive: auto-snapshots, sending/receiving to a ZFS pool on SWAN, with hg clones of important workspaces stored on an NFS-backed home directory with its own separate backups performed by Sun IT.
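The send/receive part of that setup is the standard ZFS incremental replication pattern; roughly, it looks like this (the pool, dataset, snapshot and host names here are all invented for illustration):

```
# take today's snapshot, then send only the delta since yesterday's
$ zfs snapshot tank/home@today
$ zfs send -i tank/home@yesterday tank/home@today | \
      ssh backuphost zfs receive -F backup/home
```

The `-i` flag sends just the changes between the two snapshots, which keeps the transfer small even for large datasets.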

At home though, we were storing most of our photos on my aging 2002 17″ iMac. There, when I remembered, I’d kick off a manual rsync of the Mac’s contents to a 3.5″ 160GB IDE disk containing a single ZFS pool, attached to an OpenSolaris laptop via a USB enclosure.

As you can see, the two approaches differ pretty significantly.

Added to this, the missus got a 1080p video camera for Christmas, and I figured a little extra storage would be handy. So, I decided it was time to get another computer at home that I could use as a small NAS box. I also figured that if this box was going to be left on a lot of the time, it ought to be power efficient. Along with that, wouldn’t it be good if it was capable of doing tasks other than just storing data?

I wanted at least a mirrored ZFS pool for the data, and a separate disk to run the OS from. Looking around the major consumer computer vendors, I couldn’t find any selling small, power-efficient computers that could fit three disks. If any consumer-oriented computer vendors are out there, I’m sure there’s a market to be tapped here?

The best I could come up with was a single-disk computer attached to a separate consumer NAS device. The trouble is, that NAS likely wouldn’t be running ZFS, and that was a non-starter for me.

So, I embarked on building my own. I’d seen a few good posts about building small NAS systems around an Atom processor and a mini-itx motherboard and decided to give it a go.

Here’s the parts list I finally came up with:

I went for an Atom board with an ION chipset, thinking that despite the newer D510 chips using slightly less power, they weren’t much faster than the dual-core Atom 330, and having Nvidia graphics meant I could use the box as a desktop as well as a stable storage platform. I didn’t really investigate AMD-based mini-itx boards: some of their chips look pretty low-power, and ECC RAM would have been nice. Maybe next time.

I’d read some good reviews of the Chenbro 4-disk case, but cost was a factor here: the case I eventually went with was a lot cheaper, and two hot-swap SATA disks plus space for one internal disk was enough for me. I’ve read suggestions that the case can actually fit another disk if you’re willing to hack about a bit, and I could potentially also ditch the DVD drive and bolt on another 3.5″ disk if I needed more space. For now, three disks is enough.

I planned to use a ZFS mirror on the two hot-swap disks, and leave the OS on the internal disk. Yes, a terabyte disk is a lot for an OS, but in my experience, you can never have enough scratch space.
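That planned layout amounts to something like the following (the device names are illustrative, not taken from the actual machine):

```
# mirrored pool across the two hot-swap disks
$ zpool create data mirror c7t1d0 c7t2d0
# confirm both sides of the mirror are ONLINE
$ zpool status data
```

With a two-way mirror, either hot-swap disk can fail (or be pulled) without losing the pool.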

I’d not built a PC in a long time, but this was pretty straightforward – my only quandary is whether I really need to connect the two fans at the back of the case: the motherboard doesn’t seem to get that hot during use, but for now they’re staying connected, just in case. They’re not that loud.

Installing OpenSolaris nv_131 went without a hitch: I just needed to make sure the SATA disks were set to AHCI mode in the BIOS. I found and filed 6920337 pretty early on, and was thankful to get a fixed driver within 24h of filing the original bug: much appreciated Rachel!

Otherwise, all is working well – the system has enough poke to run day-to-day desktop tasks, which for me means several terminal windows, a bunch of browser windows, pidgin, Evolution and NetBeans. I’ve also tried fullscreen mp4 playback with totem and the Fluendo gstreamer plugins, and it manages them just fine.

I’ve yet to plug the system into a power meter to see how efficient it is – I’ll add a comment to this post as soon as I find out.

Photos below, for those so inclined…


Well, that’s one bit of my life over. I worked at Sun from ’96, fresh out of college, up to today, 28th January 2010. I’m a bit irked that I didn’t make it to the 15-year milestone – not to worry.

Thus far, I’m still working at the same job, and I suppose technically, I’m still a Sun employee helping to write the OpenSolaris packaging system. I still have plans to move to New Zealand this year, all being well.

The date that JAVA dropped off the stock market was the same date that our passports arrived back from the NZ immigration dept. with residency visas attached to them, so that’s got to be a sign that it’s time to move on.

I had a wonderful time at Sun, and am hoping Oracle treats the company, its former employees and (most importantly to me) the Sun culture with the respect they deserve. Time will tell.

In the meantime, this blog, like the old one, will contain the usual mix of work and non-work content, in categories so you can avoid what you don’t care about. And, lest it not already be clear, any opinions stated here are mine alone – not Oracle’s, not Sun’s – I speak for nobody other than myself here.

It’s a new day.

virtcfg/virtadm: making xVM guest administration easy

We’ve just pushed some of the code that John Levon and I had been working on over the past few months: virtcfg and virtadm. (I was hoping to get this pushed in time for Christmas, hence the photo, but alas!)

Taking their inspiration from zonecfg(1M) and zoneadm(1M), we wanted to provide command-line and interactive interfaces to creating and maintaining xVM guests on OpenSolaris that would be familiar to sysadmins already using zones.

We tried to go a little further, and include a useful templating system that allows users to easily reuse guest configurations, or components of guest configurations (disk configs, network configs, etc.)

On the back-end, they use libvirt to communicate with the xVM hypervisor, but our eventual goal with these tools was to have them work with the other forms of virtualisation available on OpenSolaris: LDOMs, VirtualBox, perhaps also Zones: we think that with our design, it would be possible to manage all of these forms of virtualisation – though for now, the code just supports xVM.

For more information on how you can download the source and install the tools on your system, see virtcfg & virtadm: making xVM guest administration easy

We don’t have firm plans to integrate this into OpenSolaris, but thought that the code’s now in a good enough state that it could be put up on for developers to look at. Do send feedback to xen-discuss on if you like the project, or feel like sending us patches or bug reports.

Here’s a quick snippet of me showing a guest configuration in virtcfg, adding a new disk, showing what guest templates we include out of the box, then booting the guest.

# virtcfg ls -r osol
cpu_capped: 2
memory: 2048M
id/displayname: osol
id/name: osol
id/uuid: 36947b42-cf0b-4bf5-affd-6d112a061812
id/osinfo/type: solaris
id/osinfo/variant: opensolaris
id/osinfo/version: [unset]
boot/args: [unset]
boot/autoboot: no
params/acpi: no
params/brand: pv
params/hvm: no
params/ioapic: no
params/rtc: localtime
devices/network/address: [unset]
devices/network/datalink/cappedbandwidth: [unset]
devices/network/datalink/mac: 00:16:3e:6a:44:76
devices/network/datalink/name: auto
devices/network/datalink/over: auto
devices/network/datalink/type: vnic
devices/network/datalink/vlan: [unset]
devices/disk/bus: xen
devices/disk/readonly: no
devices/disk/target: xvda
devices/disk/type: hd
devices/disk/volume/capacity: 16G
devices/disk/volume/format: raw
devices/disk/volume/local: [unset]
devices/disk/volume/name: space/guests/osol-disk1
devices/disk/volume/options: [unset]
devices/disk/volume/path: /dev/zvol/dsk/space/guests/osol-disk1
devices/disk/volume/type: zfs
devices/tty/index: 0
install/configuration: [unset]
install/source: file:///rpool/guests/osol-dev-129-x86.iso
install/status: installed
install/type: cd
# virtcfg osol
loading guest osol
add [ -t tmpl ] <resource>        Add a resource
clear <property> ...              Clear one or more properties
clone <new guest>                 Clone a guest
commit [-f] [-t] [-n] [name]      Save configuration or template
context [context]                 Switch context
delete <name>                     Delete a template or guest configuration
exit                              Exit virtcfg
get | info [-p] <prop> ...        Get one or more property values
help [object] | [cmd]             Show help for a resource, property or command
history [-c]                      Print command history
list | ls [-r] [-l|-p] [path]..   List resources
prompt [on|off]                   Toggle or set the extended CLI prompting
remove | rm <expr> ...            Remove one or more resources or properties
select | cd [expr]                Navigate around the tree
set <prop>=<value>                Set a property value
templates [-pH] [-o field] [name] List templates
(osol)/:> cd devices
(osol)/devices:> add -t zvol disk
(osol)/devices/disk[2]:> ls
readonly: no
type: hd
(osol)/devices/disk[2]:> cd volume
(osol)/devices/disk[2]/volume:> ls
capacity: [unset]
format: raw
local: true
name: [unset]
options: [unset]
type: zfs
(osol)/devices/disk[2]/volume:> set name=space/guests/osol-disk2
(osol)/devices/disk[2]/volume:> set capacity=10g
(osol)/devices/disk[2]/volume:> commit
virtcfg: written osol
(osol)/devices/disk[2]/volume:> exit
# virtcfg templates guest
NAME                TYPE          LOCATION
centos-5.4          guest         -
debian-lenny        guest         -
fedora-11           guest         -
linux-hvm           guest         -
linux-pv            guest         -
opensolaris         guest         -
solaris10           guest         -
ubuntu              guest         -
windows-xp          guest         -
# virtadm boot osol
virtadm: booting osol
virtadm: creating ZFS volume space/guests/osol-disk2 ... done
Connected to domain 16
Escape character is '^]'
v3.4.2-xvm-debug chgset 'Sun Dec 06 22:26:11 2009 -0800 19673:56d9e54df9d2'
SunOS Release 5.11 Version snv_129 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.

As a footnote, I’m actually moving on to other work in OpenSolaris – pkg(5): the Image Packaging System – so I won’t be able to spend much time maintaining the part of the code I was previously responsible for (virtcfg), but I will try to keep half an ear open to virtcfg issues.

Looking back on 2009: what I did

This has been a pretty exciting year for me – learning to deal with life as a parent of two kids (it’s more than twice as hard as dealing with one, but just as enjoyable), spending two incredible weeks in NZ for Glynn & Jayne’s wedding, getting myself a little more involved in the xVM codebase, working on a few projects that taught me more about OpenSolaris, then for the last half of the year, learning Python and writing a new tool to make xVM admin/config a lot more intuitive to Solaris administrators: virtcfg. Combined with virtadm and its API written by johnlev, these tools were a lot of fun to work on. I really hope they get to appear at some point.

However, one of my favourite moments of 2009 was getting to cross the finish line of the 2009 Dublin Marathon, and seeing what this meant to people other than myself (which was a bit weird, but altogether lovely).

Next year is lining up to be pretty exciting too: we’ve been granted preliminary approval for residence visas to New Zealand – a few more bits of documentation to dig up, and we’ll be moving the family to the Southern Hemisphere to see how we get on. The exact time-frame has yet to be decided, but it’ll likely be the first half of 2010.

At the same time, an opportunity came up to try my hand at something new: in 2010, I’ll be moving positions within Sun – still contributing to Solaris, just not on xVM. I’m looking forward to having another codebase to dig into, and a learning curve I’m going to enjoy swooshing happily along.

Looking back, things I’m not so proud about include my lack of time to support and the vanishing ‘OpenSolaris news in review’ reports: I really hope these will get a new lease of life in the coming year – any takers?

In the meantime, I hope you all have a wonderful 2010, and that whatever happens, we all manage to end up this time next year having thoroughly enjoyed the ride.

bmc on DTrace visualisation at LISA ’09

Deirdre continues to produce excellent video – the latest being this one, of Bryan Cantrill talking about DTrace Visualisation at LISA ’09. I’m only 20 mins in, but from past experience, listening to Bryan is always entertaining – and so far, so good.

Having easy access to these videos is wonderful. They’re not shot in a studio, aren’t scripted, aren’t professionally lit, and don’t have flashy effects, but they’re worth way more than any professionally produced material IMHO, because they’re all about content. Keep it up Deirdre!

New zfs-auto-snapshot implementation on its way

Niall wrote a post to the zfs-auto-snapshot alias announcing his new time-sliderd implementation of the ZFS Automatic Snapshot service.

I’m looking forward to this new implementation: I wrote the old ksh-based code back in 2006 and have been adding features & fixing bugs ever since. Over time, it’s started creaking at the seams – there were a few issues that were tricky to deal with in its existing implementation. I’d long felt the desire to start again, but just couldn’t give it the time it needed. Well, Niall’s done just that – many thanks Niall!

I’ve added commentary to the thread on the auto-snapshot mailing list, and have also forwarded Niall’s announcement to zfs-discuss. The old README on this blog has been updated with a pointer to the original heads-up message.

So now I get to focus on my day-job again :-)

Halloween 2009

Happy Z-Day everyone!

Breaking with tradition somewhat, I’m not sure there are going to be any fireworks photos here this year: we’re down at my parents’ house, and are less likely to get the sort of sustained shelling that we normally experience in Raheny each year.

On the plus side, we had homemade pumpkin soup for lunch, so I was at least able to get this shot. If there are any fireworks later on, I’ll update this post – but otherwise, here’s some Halloween cheer:

The day’s been great so far – lovely birthday presents: a Merino base-layer and a copy of Neverwhere from the lovely missus, DVDs of the first two Ice Age films from E (I think she had an ulterior motive there!), and a nice fleece from my folks.

I also popped out for a birthday run around Greystones this afternoon – only 10k, but it was enough for me to realise I’m far from back in running form: the recovery is going to last a few more weeks, I think!

My folks are baby-sitting tonight, so myself and the missus get to go out for a grown-up dinner, which I’m really looking forward to. Happy Halloween!

Update: There were fireworks after all, here’s a few shots: fantastic, the tradition continues!

26.2 miles

I ran the Dublin City Marathon yesterday in approx. 3h 30m (more on that later) – that’s my first marathon, but I suspect not my last: to anyone even half-thinking of running 26.2 miles, you have to try it.

My motivation for running started several months back on a low note. The background was that I was increasingly working from home; with work being busy, and a desire to eat dinner with the family and be around to put the kids to bed, I noticed there were days when I wasn’t leaving the house at all. At the same time, the rumours started about Sun being in talks with various companies about a possible acquisition.

When the Oracle deal was announced it made things worse – what had previously just been rumours in the press became a lot more believable. Usually when something like this is going on, I’ll write my thoughts about it here: but doing so would have been unprofessional. The furthest I ever went was the occasional emoticon on my twitter feed.

The solution for both problems, I decided, was to get out in the mornings and run: exercise, and a little time to think.

As the months went by, I was running further and further, logging my progress to @timfoster as I went, and generally having a whale of a time. One weekend, we had a few friends over for lunch – Kev, Nic, Mike & Maria. A bit of the conversation went something like this:

“That’s great running you’re doing Tim, you should do the marathon”

Well, that was it – I’d had vague thoughts of doing it one day, but hearing someone else say it out loud was enough to make me register that night, with some trepidation, I might add. The registration form grouped entrants into three categories by expected finishing time: 3h or less, 3:30-4:15, and 4:15+. I had no clue where I belonged, so popped myself in the middle and got on with it – that was July 28th this year.

I figured I ought to be following some sort of formal training plan.
The athletics forum on was a great resource, pointing newbies like me to Hal Higdon’s running site. I chose the Intermediate II schedule, as I figured I was already reasonably fit, with the occasional daily bike commute and all the running I’d done so far. I’d missed a few weeks at the beginning of the formal program, so worked out where I should be (which turned out to be a good place to start, given the running I’d already done) and stuck to it.

As I got closer to the race, I was dutifully doing my Long Slow Runs each weekend, and was coming up to the last of the three 20-milers when I became anxious about what sort of pace I’d manage in the race – could I go fast over a long distance? I didn’t know. I didn’t want to risk burning myself out in the early stages of the race and end up not finishing, so I decided to push it and do a Long Fast Run instead – you’re not supposed to do this during training, and I found out why. Yes, I discovered that I actually could manage a 7:15 min/mile pace over 20 miles, but I also managed to injure my leg in the process. To make matters worse, I’d miscalculated where I was joining the schedule, which left me with only 2 weeks to taper before the race rather than 3 – and most of those 2 weeks were spent just resting my leg rather than doing the suggested mileage.

With that, race day was upon me. I was aiming for a 3h15m finish – but given the last few weeks, this was probably unrealistic.

The atmosphere around the start was tense, but an amazing experience – very very well organised I thought: a record turn-out of 12,500 people this year.

After a lot of limbering up, and shedding of bin-liners, the starting gun fired. We moved very slowly at first, eventually getting past the starting line. The race started gently: I was in the middle of the 3:30-4:15 pen, and the first few miles were depressingly slow – it was difficult to overtake slower runners, and I was already well down on my target. They say one of the most common mistakes new marathon runners make is starting too fast, so I kept repeating that to myself and tried to stay calm.

By mile 5, I’d escaped the crowds in Phoenix Park and picked up the pace, passing Kev & Nic at mile 10, who were out cheering for me (thanks!!), and managed to keep pretty much on target till about mile 16, where I started slowing down – only to slow further on miles 20/21 (the dreaded Roebuck/Foster Avenue hill). The missus & the kids, and Mum & Dad, were cheering for me there, and I spotted my friend Barry too, who was marshaling for the race: familiar faces making the run a lot easier.

In general, the support from the crowd on the day was phenomenal: I really hope the people who got up early on an October Bank Holiday Monday appreciate what a difference their cheering makes to runners – and, to the lady watching the race who gave me a jelly baby around Kimmage, to whom I forgot to say “thank you” – many many thanks, it was yummy, and much needed!

Things took a turn for the worse around mile 22 – going down Nutley Lane, I felt a twinge in my thighs that I’d felt once before during training: the onset of cramp. I had to stop, stretch and shake out my legs periodically, eat more jelly babies and start again. My times were tumbling now, and I watched in dismay as the 3:30 pacer balloons passed me on Merrion Road. For the rest of the race, I kept going as fast as I could manage, but it wasn’t enough.

Rounding the final corner onto Pearse St. I went for it, eating my remaining sweeties and telling myself it’d soon be over. I crossed the line, and stopped my watch, which told me 3:30:46. I was tired but happy – I’d missed the 3:15 target, but there’s always the next marathon.

As for official timing, I’m still a wee bit confused – the timing service tied to the chip on my bib told me I’d finished in 3:32:04, yet when I visit the results page on the Dublin Marathon website and search for my number (4871), at the time of writing, it gives a chip time of 3:34:04, but tells me my finish time is 3:30:34. I know there’s a difference between chip time and gun-time, but I’d always thought gun-time should be longer than chip-time (gun-time includes the time it takes you to actually reach the starting line, whereas chip time is registered only from the moment you cross the start-line). Perhaps they got the numbers mixed up – 3:30:34 is closer to what was on my stopwatch. Anyway – it doesn’t really matter.

I had a ball on my first marathon – I finished 1516th out of a field of 12,500, which I’m happy about. I’ve got memories I’ll never forget, and a belief in myself that I never had before. Sure, I’m walking around like Boris Karloff today (legs rather sore) and am wishing our house didn’t have stairs, but I’m extremely proud of what I’ve achieved. I think I’ll keep running, and would strongly recommend that everyone have a go at completing a marathon: it’s a truly unforgettable experience.

Here’s the route map, and the splits from my watch – I missed a few mile markers here & there, so just rolled up those times into a single row. As you can see, I lack consistency here – so that’s something to work on for next time.

Mile Time by my watch (min:sec)
1 8:02
2, 3, 4 26:12
5 7:55
6 7:40
7 7:21
8 7:06
9 7:02
10 7:10
11 7:06
12 7:36
13 7:33
14 7:16
15 7:33
16 7:50
17 7:40
18 7:47
19 8:02
20 8:31
21 8:41
22 7:57
23, 24 18:34 (yeah, cramps started about here)
25 9:33
26 8:33
0.2 1:52

OpenSolaris at OSS BarCamp

I presented An Introduction to OpenSolaris last Saturday at OSS BarCamp – my contribution to Software Freedom Day 2009.

You can download the odp presentation or the pdf, which I’ve exported with my notes for the talk that explain each of the slides a little more – I hope this is useful if you’re planning on giving a similar talk.

A few things struck me while preparing for and giving the presentation. Firstly, it seemed odd to be giving an introduction to an operating system that’s been around for quite a while now: clearly we haven’t been doing enough of this sort of thing (and from personal experience, yes, ie-osug isn’t as active as I’d like – I just sadly don’t have the bandwidth).

Secondly, it’s really hard to cover all of the interesting features of OpenSolaris in sufficient detail over the course of an hour. My take was to try to whet the appetite rather than explain every feature fully – and in some cases, to go for the features I thought the audience might be interested in (for example, mentioning NWAM as one of the major networking features – perhaps it’s not as full of rocket science as other aspects of Solaris networking, but it makes a huge difference to the novice user).

Finally, and slightly embarrassingly, I had to spend about 5 minutes in front of an expectant audience futzing around with the display settings on my R500 laptop to get it to talk to the projector. It was doubly annoying that both xrandr (and indeed gnome-display-properties) could see the separate screen, but try as I might, I couldn’t get any output to appear externally. Ultimately, a kind audience member offered me a USB key, by which I transferred the pdf over to my EeePC (running nv_122) – it could see the projector, but didn’t have any of my demo material set up (some ZFS settings, some zones, crossbow, flows etc.). Oh well.

Here’s hoping that at least some of the audience left with an impression that OpenSolaris was worth taking a second (or perhaps a first?) look at, despite the brevity of my talk and the initial teething problems I had. My new mantra:

Never work with children, animals, or weird external VGA projectors.