Replacing the Application Packaging Developer’s guide

Over the last while, we’ve been writing a replacement for the venerable Application Packaging Developer’s Guide.

I started at Sun in the European Localisation group, and writing tools to create (and sometimes hack) SVR4 packages for Solaris was one of the things I had to do quite a bit – I found myself consulting the Application Packaging Developer’s Guide frequently. So, having the opportunity to help write its replacement was pretty cool.

Bart did a great job in defining the structure of the new book, and writing a lot of the initial content. Brock wrote some comprehensive content on package signing, SMF and zones, and I’ve been working on the guide since then, using some of the original documentation that Stephen wrote.

Unlike its predecessor, this guide contains fewer examples. We feel that our man pages are better than the SVR4 packaging man pages were, and already contain useful examples; this guide is meant to complement our man pages, not replace them.

The guide is a lot shorter than the old book – currently 56 pages, as opposed to the 190 pages of the document it replaces. Some of that is down to the reduced number of examples, but it also helps that we don’t have to write about patch creation, complex postinstall or class-action scripting, or relocatable packages. IPS is simpler than SVR4 in many ways, though there is a learning curve, which this book aims to help with.

There’s more work to do on the book (as you can tell by the “XXX” markers that appear throughout) but what we’ve got so far is here: source code, and here, inelegantly rendered as a pdf.

If you use IPS, or have had experience in packaging software for Solaris previously, I’d be interested in hearing your thoughts on the content so far.

Updated: I’ve updated the PDF with the latest content as of today, and made sure there’s a datestamp on the file, so you can tell which version you’re reading. Changelog below (though I’ve not put the source back to the gate, yet, so these changesets will likely be collapsed)

changeset:   2576:6b14edfc38b5
tag:         tip
user:        Tim Foster 
date:        Fri Oct 21 09:49:15 2011 +1300
	Add another example to chpt14, republishing under a new publisher name

changeset:   2575:0cf619c409b1
user:        Tim Foster 
date:        Thu Oct 20 15:51:22 2011 +1300
	complete the SVR4 conversion example in appendix2

changeset:   2574:3a671202fd35
user:        Tim Foster 
date:        Thu Oct 20 12:09:48 2011 +1300
	Rewrite the var/share discussion in chapter 10
	Add macros and use them throughout the book
	Add a datestamp and logo to the cover page
	use hg identify for versioning
	add more text explaining why we don't "cross the streams"
	Add note on Immutable Zones to chapter1
	Nobody uses fax machines anymore, do they?
	Add text on pkg.human-version to chapter 3
	Add content for the user action
	Add content for the group action
	Introduce the constraints/freezing section of chapter 6
	Reference constraints/freezing section in chapter 7
	Describe XXX sections more clearly

Updated: I’ve added more stuff, replaced the links, but not yet pushed these changes to our external gate. Changelog below.

timf@linn[626] hg log -r 2577:
changeset:   2577:83de3ed97341
user:        Tim Foster 
date:        Fri Oct 21 16:23:37 2011 +1300
	Break up chpt5
	Move XXX for versioning disussion to chpt3
	Fix table of contents

changeset:   2578:df045bbafc98
user:        Tim Foster 
date:        Fri Oct 21 17:19:22 2011 +1300
	Write content for mediated links, with an example

changeset:   2579:1c3d87d950e6
tag:         tip
user:        Tim Foster 
date:        Fri Oct 21 17:26:50 2011 +1300
	Move XXX for versioning disussion to chpt3 (fix build)

Updated: Almost at the final version – there are a few small changes still to make, but I’ve updated the links to point at the pdf of that version, which hasn’t yet been pushed to the gate. Too many commit messages to list what’s changed, unfortunately.

Updated: More minor changes and some style/branding tweaks. I’ve updated the pdf links above (2011-11-07:16:40:12-NZDT 8e2ee40e0bfb tip)

Updated: seems to be having issues. So in the meantime, there’s a local copy here


FeedPlus – creating an Atom feed from a G+ public stream

Original image by hapinachu on Flickr

I had been using Russell Beattie’s PlusFeed application to create an RSS feed that I could pass to TwitterFeed, in order to syndicate my G+ posts over to Twitter.

It wasn’t perfect, but was good enough to do the job.

Unfortunately, Google recently changed the pricing for AppEngine, which led Russell to turn off the service.

I wasn’t able to find any good alternatives out there, so quickly rolled my own. It takes the id of a G+ user, and spits out an “atom.xml” file into the directory you choose, which you can then send to anything that can consume Atom.

The code is up on GitHub if anyone’s interested. My code isn’t perfect either, but will do the job until something better comes along: FeedPlus.


Update, Dec 13th 2011: I’ve updated the app to use the official G+ API now, rather than the JSON data that Russell’s app relied on. Hopefully this will make it a little more stable. Next step: adding an option to have it post to Twitter directly, rather than just writing Atom.

Please unsubscribe me

As promised here I spent the last week avoiding Twitter and Facebook, giving Google+ a go.

Summary? I think it’s fabulous.

I like the idea of Circles, I like the user interface, and I have yet to start accumulating the amount of crap I was tending to get on Facebook (pointless games, updates from people who clicked on spammy “see who’s been watching your profile” apps, that sort of thing).

So far it’s missing something that, for me, makes Twitter the better option if you’re looking to follow interesting people or find posts about particular topics without necessarily knowing who those people are: #hashtags. Being able to tag posts would be useful too, because while I like being able to view just the posts from a given Circle of friends, not everything those friends write about is necessarily relevant to me.

I’ve a few other nits with G+ that I’d like to see addressed as well, including better client support on Android and a public API (which will no doubt help with the first item), but I’m happy to be patient.

It’s early days with Google+, though what I’ve seen so far makes me think it won’t be long before they’ll have the extra features that seem like an obvious addition to any social networking application.

So, what’s my plan with Twitter and Facebook?

Well, the week-long experiment on Google+ wasn’t long enough for me to decide whether or not to keep using the other services. My plan for now is to hang on to my Twitter account, @timfoster, and essentially treat it as read-only, except for direct messages and any replies I feel like posting to other folks on Twitter.

I’m going to try to use +Tim Foster for everything else.

As for Facebook, I’m going to give it up completely – I really only started using it to connect with the people that I know who don’t use Twitter, and seeing as there’s a better application out there now, it seems like Facebook has outlived its usefulness. I’d honestly rather one corporation had a detailed map of me than two (not that I can be particularly precious about this: I’ve got an Android phone, use Gmail – so in a sense, I’ve already signed my soul over to Google)

So, I’m closing my Facebook account as of next week, and stopping the Twitter re-publication now (I rarely ever actually post on Facebook, usually it’s just my Twitter feed getting redirected there)

If you would still like to keep up with me, get onto Google Plus – I’ve a few invites left which I’ll give to you if I know you, but I’m guessing they’ve got to be close to opening the service for anyone who wants in, invite or not. Otherwise, join Twitter.

2011 Armstrong Motor Group Wellington Marathon

I ran the 2011 Wellington Marathon on the 19th of June – that was my 3rd marathon and, without doubt, the toughest one I’ve done to date. As with the other races I’ve run, here’s a writeup of my experiences.

There’s also an official race report, if you’re interested.

Going into the build-up for the race this year, my goal was to beat last year’s time, but really I wanted to see if I had a sub-3hr marathon in me. I knew it’d be a stretch, but I was going to give it a try.

With that in mind, I stepped up to the next level of training and got serious about it. As in earlier years, I thought I’d stick with Hal Higdon’s training programs, but this time going for the “Advanced – I” plan. Over and above the “Intermediate – 2” plan that I followed last year, this one added speed-work, hill repeats, tempo runs, and an extra day of running.

So, for 18 weeks during the lead up to the marathon, I was running 6 days a week, which was extremely grueling – not just for me, but also for my long-suffering family. They were incredibly supportive during the training, thanks Bob, Ella & Calum!

As before, I took a unit-test-inspired approach, printing out the training plan and sticking it to our fridge, with the aim of turning it green. Days when I just couldn’t be bothered running would get a red mark; days when I was unable to run, either because I was sick or because of work pressure, would be highlighted in yellow. Otherwise, I got on with the training, slowly colouring each square green as I went. Ella was happy to help with the colouring too – she did a great job!

Overall, I was happy with the training progress – here’s the completed chart. I was sick for about 10 days in total, and if you match up pkg(5) integration dates for the projects I was working on, you can probably spot where work got in the way of running :-)

During training, I was getting my Yasso repeats up to the point where I was definitely in the right ballpark for a sub-3hr finish.

As I got closer to race day though, I became uncertain whether I’d be able to run the pace I was aiming for over the full distance. The longest I’d run at full race-pace was 16k. Still, I reasoned that those 16k were very hilly during training, so perhaps on the flat, I’d be alright. There was only one way to find out.

Race day, like last year, was cold and wet – a marathon at the end of June is in the depths of winter here. We lined up on the concourse of the Westpac stadium at 7:30am ready for the off.

My first few kilometers were slow as I moved through the pack, and I only looked at my watch as we approached Te Papa seeing that I was a little behind. I picked up the pace a bit, before settling into my race-pace.

The wind around the Wellington bays was strong and gusty, and like last year, I drafted when I could and took my turn. Getting out towards Shelly Bay, my 10k time was looking good – there was another runner doing about the same pace as me so we hung on and kept each other going. At this point, I was running a pace of between 4:01 and 4:12/km (depending on wind and water stations, I guess) which should have had me finishing around 2:52:00 – bang on the pace I was hoping for.

Thinking about pacing before the race, I had eventually decided that I’d go out at a really aggressive pace, aiming for well under a 3hr finish, rather than being more conservative and potentially making it over the line in 3:01 or thereabouts. While just scraping in under 3hr would have been great, I’d have hated to put in all that effort and miss my goal by only a few seconds.

The question I faced was this: was I willing to risk blowing my chances of beating last year’s time at all, or even the dreaded “DNF” (Did Not Finish), just to avoid the anguish of barely missing a sub-3hr finish? Yes, I was.

At the half-way turn-around, things were still going well. I was keeping to my pace, and feeling quite good about progress so far, but they say the first half of a marathon is just about transport – the race really begins at the halfway mark.

My pace slowed slightly leading up to the 30km mark and then I hit the wall – or rather, my legs did.

With only 10k to go, I started feeling slight twinges in my calves, which made me extremely worried. I tried easing up on my pace a little, to give my legs a chance. Apart from my legs though, I still felt good at this point – I wasn’t tired or out of breath, and definitely had fight left in me.

Unfortunately, things only got worse from here. As I made it back along the route we’d taken earlier, now joined by the half-marathon runners, the cramps in my legs became worse. At water stations, I’d drop to walking pace, trying to struggle back up to at least a 5:00/km pace when I started running again, but the cramps persisted and continued to get worse, as did the pain.

Partly it was pain in my legs, which was getting really intense at this point, but also it was from seeing my goal-time slipping away from me: there was nothing I could do about it.

The last 5k were gruesome – I just wanted to finish the race at this point, and didn’t care what sort of time I’d finish in. Finally limping over the line, I was happy it was over, but depressed at the same time.

As it turned out, I did beat last year’s time of 3:12:39, with a 3:11:37 finish. So, was all the training leading up to the race worth a finish only 62 seconds faster than last year’s?

Well, I can’t pretend that I wasn’t hugely disappointed with my result, but at the same time, I keep telling myself that a 3:11 marathon finish isn’t shabby by any measure. I know I’m faster than that, but I’m still weighing up whether I want to try to prove it or not next year.

In the meantime, I’m taking a break from running – give it a few months and perhaps I’ll be back on the roads again, but for now, I’m enjoying the extra free time I suddenly seem to have!

Race results, and splits from my watch during the race are below:

Tim Foster – bib number 224
Division M18-39
Net time 3:11:37
Net place 25th of 370 total finishers
24th of 259 male finishers
16th of 136 M18-39 finishers
Average pace 04:32/km
Km split cumulative time
1,2 8:48
3-5 15:39 24:27
6,7 7:51 32:19
8,9 8:10 40:29
10,11 8:25 48:55
12,13 8:28 57:24
14,15 8:04 1:05:28
16,17 8:03 1:13:31
18,19 8:16 1:21:48
21.2 4:34 1:26:22 (half-way marker)
22 3:40 1:30:02
23,24 8:20 1:38:22
24,25 8:31 1:46:54
26,27 8:27 1:55:22
28,29 8:43 2:04:05
30,31 9:45 2:13:51
32,33 9:13 2:23:05
34,35 10:58 2:34:03
36,37 11:21 2:45:25
38,39,40,41,42.2 49:17 – that’s wrong, I forgot to stop my watch

3d-printing and running shoes

Here is a selection of the last three pairs of running shoes I’ve owned:

– they’re all Asics Nimbus x, and all from successive years. I’ve trained for and run a marathon in each of the last two pairs, and am currently putting my most recent pair through their paces with another race in the next few weeks. I’m running 6 days a week at the moment on the road, with a maximum of about 50 miles/week. I really like these shoes – honestly, no complaints and no injuries to speak of.

The thing is, it’s getting to be fairly obvious to me where my runners wear out the fastest – just on the outside edge of the heel, and at a fairly consistent area on the midsole. (yes, I know I should have changed the red & white pair well before they got into the state they’re in)

So, here’s my idea: could someone please come up with a mechanism that allows me to bring these old shoes to my running shop the next time I’m getting a new pair, so that they can do a quick 3d-scan of the old shoes, then send the scan off to the factory along with my weight, my height, and a brief description of the sort of running I do?

There, they would take the scans, run them through a CAD system, do some magic, then 3d-print a new pair for me with slightly harder-wearing rubber on the bits that wear out most quickly for me. They’d send them back to the shop a few weeks later and I’d pick them up. I would even pay slightly more for this, if it means my runners don’t wear out as quickly, and they’ll have a loyal customer for life.

Yes, I admit the possibility that harder-wearing parts of the sole might mean less cushioning in those areas, and that maybe the runners are designed to fall apart when they do in the interests of protecting my joints. Still, it’d be an interesting experiment, I think.

Focaccia recipe (incl. potato)

I’ve had a few people asking for the focaccia recipe I mentioned here, so here goes.

It comes from Annabel Langbein’s The Free Range Cook and someone has helpfully transcribed it here.

In addition to the transcription above, Bob mentions that you should mix the extra virgin olive oil with the potato using a fork until it’s smooth before adding it to the dough.

Yes, potato in a focaccia recipe – some say this betrays our Irish roots, and they’re probably right, but it makes for pretty fabulous bread.

pkgsend and SVR4 packaging

Original image by Richard 'Tenspeed' Heaven

One of the design goals for IPS was that scripting should be moved out of the packaging system, with scripting only ever occurring in the environment the software is intended to run in (e.g. after boot, rather than at install time) – Stephen talks more about that here.

Related to that, I’ve had a chance to do a bit more work on pkgsend(1) recently to help developers through this transition. With the putback of:

changeset:   2276:d774031d55bd
user:        Tim Foster 
date:        Thu Mar 24 09:56:10 2011 +1300
	17826 pkgsend should warn for potentially problematic SVR4 packages
	17891 pkgsend could convert more parameters from a SVR4 pkginfo file
	17905 pkgsend should return a better warning for multi-package datastreams

pkgsend is now more helpful in how it deals with SVR4 packages.

Until now, when converting packages from SVR4 to IPS, pkgsend has ignored any preinstall, postinstall, preremove, postremove and class-action scripts present in the SVR4 package. So while this gave users an installable IPS version of their packages, there’s a good chance the packages wouldn’t have functioned properly, since those scripts would never run.

With this small change, we now report errors when we encounter scripting, giving the package developer a heads-up that a little more work is needed to properly convert their package, and telling them exactly what needs attention. For example:

timf@linn[29] $(pkgsend open test/example@1.0)
timf@linn[30] pkgsend import ./SUNWsshdr
pkgsend: Abandoning transaction due to errors.

pkgsend: import: ERROR: script present in SUNWsshdr: postinstall
pkgsend: import: ERROR: class action script used in SUNWsshdr: etc/ssh/sshd_config belongs to "sshdconfig" class

Investigating this particular sample package (an old S10 version of ssh): the postinstall script makes sysidconfig perform ssh host-key generation (something that’s now done automatically by the SMF method script for ssh), and the sshdconfig class-action script removes any “CheckMail” entries from previously installed, preserved sshd_config files.

Along with these changes, we’re also converting more information from the SVR4 pkginfo file – populating the package description and summary fields automatically, and creating pkg.send.convert.* attributes for other pkginfo parameters that may have been defined.

The expectation is that package developers will change those attribute names to a name that better suits their needs, either by editing the manifest that pkgsend generate produces for you from the SVR4 package, or by using pkgmogrify(1).
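To illustrate the sort of conversion being described, here’s a small sketch in Python. This is my own illustration, not pkgsend’s actual code: the exact NAME/DESC handling is an assumption, and only the pkg.send.convert.* naming comes from the text above.

```python
def pkginfo_to_actions(params):
    """Turn SVR4 pkginfo parameters into IPS 'set' action strings.

    Assumes NAME maps to pkg.summary and DESC to pkg.description;
    any other parameter is preserved under pkg.send.convert.* so the
    developer can rename it later (e.g. with pkgmogrify).
    """
    special = {"NAME": "pkg.summary", "DESC": "pkg.description"}
    actions = []
    for key, value in sorted(params.items()):
        attr = special.get(key, "pkg.send.convert." + key.lower())
        actions.append('set name=%s value="%s"' % (attr, value))
    return actions
```

A developer would then turn those pkg.send.convert.* actions into properly named attributes before publishing.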

Hopefully these changes make pkgsend a little more helpful to developers when converting their SVR4 packages over to IPS.

pkgdepend improvements

I should have written about this a few days ago, but better late than never.

With the putback of:

changeset:   2236:7b074b5316ec
user:        Tim Foster 
date:        Tue Feb 22 10:00:49 2011 +1300
        16015 pkgdepend needs python runpath hints
        16020 pkgdepend doesn't find native modules
        17477 typo in pkgdepend man page
        17596 python search path not generated correctly for xml.dom.minidom
        17615 pkgdepend generate needs an exclusion mechanism
        17619 pkgdepend generate is broken for 64-bit binaries when passing relative run paths

pkgdepend(1) has become better at being able to determine dependencies. I’d done some work on pkgdepend before, and it was nice to visit the code again.

To those unfamiliar with the tool, I thought I’d write an introduction to it (which I should have written last time).

pkgdepend in a nutshell

pkgdepend is used before publishing an IPS package to discover which other packages are needed in order for the contents of that package to function properly. The packaging system then uses those dependencies whenever the package is installed, automatically installing them for you.

During the creation of a package, the process of running pkgdepend on your manifests is broken into two phases, each with its own subcommand.

pkgdepend generate

The first is called ‘generate’. This is where the code examines each of the files you’re intending to publish in your package. Depending on the type of file it is, we look for clues in that file to see what other files it may depend on.

Those clues could be as simple as the path that comes after the ‘#!’ in UNIX scripts (so for a Perl script with ‘#!/usr/bin/perl’ at the top of it, obviously you need to have Perl installed in order to run the script) or could be complex, such as digging around in the ELF headers in an ELF binary to find the “NEEDED” libraries, determining Python module imports in a Python script, or looking at ‘require_all’ SMF services in an SMF manifest.
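As a toy illustration of the simplest of those clues – the ‘#!’ line – here’s what the extraction might look like in Python. This is not pkgdepend’s actual code, just a sketch of the idea:

```python
def shebang_clue(first_line):
    """Return the interpreter path implied by a script's '#!' line, or None.

    The leading '/' is stripped, matching the way file paths appear in
    IPS manifests.
    """
    line = first_line.strip()
    if not line.startswith("#!"):
        return None
    rest = line[2:].strip()
    if not rest:
        return None
    # The first token is the interpreter; anything after it is arguments.
    return rest.split()[0].lstrip("/")
```

So a Perl script starting with ‘#!/usr/bin/perl -w’ yields a file dependency on usr/bin/perl, which the resolve step later maps to a package.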

There’s a list of all the things used so far to determine dependencies in the pkgdepend(1) man page.

Once pkgdepend has gathered the set of files it thinks should be dependencies for the files you’re delivering, it outputs another copy of your manifest, this time with partially complete ‘depend’ actions.

I say partially complete because, at this stage, all we know is that your package needs a bunch of files in order to function properly: we don’t yet know what delivers those files. That’s where the second phase of pkgdepend comes in: dependency resolution.

pkgdepend resolve

During dependency resolution, via the ‘pkgdepend resolve‘ subcommand, we take that partially complete list of depend actions, and try to determine which package delivers each file the package depends on.

In order to do this, pkgdepend needs to be pointed at an image populated with all the packages that your package could depend on. In most cases, the image is simply the machine you’re building the packages on (remember, in IPS terms, every package is installed to an “image”: your running copy of Solaris is itself an image), though you could point ‘pkgdepend resolve‘ at an alternate boot environment containing a different image.

Assuming we’re successful, you’re then presented with a version of your package in which every dependency has been converted from the filename needed to satisfy it to the actual package IPS will install in order for your package to function.
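Conceptually, the resolve step behaves something like this Python sketch. The index and package names here are invented for illustration; the real resolver works from the manifests of installed packages, not a flat dictionary:

```python
def resolve_deps(file_deps, image_index):
    """Turn file-level dependencies into complete 'depend' actions.

    image_index maps each delivered path to the package that delivers it,
    as discovered from an installed image. Paths with no known deliverer
    are reported back as unresolved.
    """
    resolved, unresolved = [], []
    for path in file_deps:
        pkg = image_index.get(path)
        if pkg is None:
            unresolved.append(path)
        else:
            resolved.append("depend fmri=%s type=require" % pkg)
    return resolved, unresolved
```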

Things that can go wrong

I say “assuming we’re successful” because, unfortunately, sometimes we’re not.

There are several things that can go wrong:

  • an ELF header entry could be incorrectly specified at build time, or could contain $VARIABLES that pkgdepend doesn’t know how to expand
  • a file might be delivered by multiple packages on your system, in multiple places
  • a python script might modify sys.path, a shell script might modify $LD_LIBRARY_PATH, etc.
  • we could deliver scripts only meant to be read, not run (demo scripts, for example) which could cause either fake dependencies, or dependencies which could never resolve

All of the things above can result in error messages from pkgdepend, where it’s unable to determine exactly what we should be depending on – this is the part of pkgdepend I was trying to fix in my putback.

It fixes a few bugs in pkgdepend when dealing with Python modules and kernel modules, and it introduces two new IPS attributes:

  • pkg.depend.bypass-generate
  • pkg.depend.runpath

The first, pkg.depend.bypass-generate, is used to specify regular expressions matching files on which we should never generate dependencies. This gets us around the cases where multiple packages deliver the same file in several places, or where $VARIABLES aren’t being expanded. Bypassing dependencies this way is useful, though you do need to be careful where and how you apply it – if you bypass a legitimate dependency, there’s a good chance your package won’t function properly when the packages it depends on aren’t installed.

The second, pkg.depend.runpath, is used to change the standard set of directories that pkgdepend searches, per file type, when looking for file dependencies. This gets us around the cases where programs are installed in non-standard locations.
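The effect of bypassing can be modelled in a few lines of Python. This is an illustration of the idea only – whether pkgdepend anchors its patterns to the full path, as done here, is an implementation detail I’m not relying on:

```python
import re

def bypass(file_deps, patterns):
    """Drop any generated file dependency whose path matches one of the
    pkg.depend.bypass-generate patterns (anchored to the whole path)."""
    compiled = [re.compile(p) for p in patterns]
    return [d for d in file_deps
            if not any(c.fullmatch(d) for c in compiled)]
```

For example, a pattern like `opt/demo/.*` would suppress dependency generation for demo scripts while leaving real library dependencies alone.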

What’s next?

Alongside this work, I’ve been doing work on the ON package manifests to greatly reduce the number of pkgdepend errors reported during the ON build. (Sadly, I can’t share the work on the ON manifests yet, but it will go back once snv_160 is available internally. If you’re an ON engineer, there’ll be a Flag Day attached to this, making snv_160 the minimum build on which you can build the gate.) Quite soon after that, we’ll be able to enable error-reporting from the pkgdepend phase of the build, and that will be fabulous.

I’d strongly encourage those working on Illumos and other derivatives of the OpenSolaris codebase to investigate the new pkgdepend functionality, and put in the time to get their gate pkgdepend-clean too.

Why? Well, in my view, one of the problems with SVR4 packaging was that it lacked any sort of automatic dependency analysis. This meant that packages declared manual, often-bogus dependencies on other packages – and dependencies that aren’t correct make minimisation of systems very difficult.

When we determine dependencies automatically, minimisation becomes a lot easier.

Crucially, so does package refactoring: if we split or merge packages, so long as those new packages are installed on the image being used to resolve dependencies, the packages that have dependencies on those split/merged packages automatically pick up the new package names the next time they’re published.

However, without actually checking the exit status of the pkgdepend phase of the build, you end up inserting more manual dependency actions than strictly necessary, and that’s a bad thing.

Of course, sometimes we can’t avoid inserting manual dependencies – pkgdepend isn’t finished yet, and there’s more we could be doing to determine dependencies at package publication time. Still, the tool makes life a lot easier. So if you’re ever tempted to insert a manual dependency into your package, please think carefully about it, and add a comment to the manifest explaining in detail why that manual dependency is really required.

Looking back on 2010: what I did

2010 was a busy year for myself and the family. Here are some of the things I got up to:

  • Moved job

    I joined the IPS team in January

  • Moved company

    Sun Ireland disappeared in April, and I became an Oracle employee. Personally, that’s working out very well and I’m not noticing much difference in terms of my day to day work. Yes, the company has a different culture – and if there were a “Top Gear Cool Wall” for IT companies, in my opinion, Oracle would likely be “Seriously uncool“, with Sun as “sub-zero“. I think the Solaris engineering culture has survived the transition so far, at least at my level. We’ll see what the New Year brings.

  • Moved country

    I moved from Ireland to NZ in May, having gone through the long process of applying for residency here. That this coincided with the economic conditions in Ireland was purely chance. Our decision to move was not related to work or the economy, it was purely based on a willingness to stretch our horizons and see some more of the world. So far, NZ is treating us well – ups and downs.

  • Ran another marathon


  • Was mentioned in a piece by the Irish Times

(albeit with a couple of errors: they used my wife’s maiden name, and mis-credited my photo to Alan Betson – though I’m happy to have my photographs confused with his, I’m really not in the same league :-) )

  • Did a total of 24 putbacks to the pkg-gate.

    This covered 66 bugs or RFEs – some were minor and didn’t take a lot of work to get right, like 12723, some were major like 13536 and were projects in themselves. All were a lot of fun to work on though, and I feel privileged to continue to be employed on something I enjoy.

Looking forward to 2011, I’m hoping to explore more of NZ. If you follow me on Twitter, subscribe to my Flickr feed or look at some of the stuff I’ve put up on Smugmug, you’ll see some of the photos I’ve taken this year: this country really is incredibly beautiful – indeed, as is Ireland. You should visit both places, then pick which one you want to live in.

There’s lots to do next year – we’re moving house in a few weeks time, and I get to set up a new home office, in a separate building to the house – so I’m hoping my struggle for a healthier work/life balance will start tipping in the right direction.

I’ll also try to keep up the running – possibly entering the Wellington marathon again to see if I can better this year’s time, though being honest, I’m not sure I’ve got a faster pace in me – I’ll aim to improve though. Having recovered from the marathon, I’m doing a lot less road-running and a lot more mountain/off-road running now so perhaps it’s time to look for a different sort of event instead?

Roll on 2011 :-)

Scripting languages as OS core components

"A Shell Script wants your job", Original image by jm3 / John Manoogian III

There’s been some crowing recently about how wonderful it is, that a scripting language is no longer a dependency for an OS build.

My opinion is that this is a shame on many levels: it’s a shame because the time spent rewriting all of this code could have been better spent elsewhere, it’s a shame because the new code has presumably introduced a heap of new bugs, and it’s a shame because it raises the bar for potential contributors to the codebase: if you don’t know C, you can’t write code for them.

Most importantly though, I believe that scripting languages have a better place in /usr/bin and as helper components for core OS functionality than some folks seem to believe.

I’ve been writing in Python for a few years now, first as part of our Xen port, now on IPS, and I think that the sorts of things most OS commands do are far easier to express, code and debug in Python than they are in C.

Perhaps it’s just me, but I’m much more comfortable firing up an editor and debugging a script I can see than I would be having to download and set up a complex build environment, compile sources (if that source is even available :-/ ) and drop binaries in place.

Excising Python from an operating system is like chopping off your arm. Sure, you’ve fewer dependencies now (the original reason cited for removing the code in question) – no need for wrist watches, and you won’t get worn patches on your jumpers at the elbow, etc. but I’m not convinced it was the right move.

Still, each to their own – this is merely my opinion.

pkg history enhancements

Original image by Kristian Bjornard (bjornmeansbear)
With the putback of:

changeset:   2135:6eeb55920e13
tag:         tip
user:        Tim Foster
date:        Wed Nov 10 11:32:32 2010 +1300
	3419 history command -- ability to specify dates or date ranges desired
	17012 pkg history should include boot environment or snapshot information
	17013 pkg history subcommand should display boot environment and snapshot information
	17222 pkg history could use a -o option

the pkg ‘history’ subcommand now has some new features.

We now record the name of the boot environment the operation was applied to, any ZFS clones created, and any ZFS snapshots taken during the course of the operation.

In addition, pkg history now accepts a comma-separated list of column names (via the new -o option) to print different output. The known column names at the moment are:

  • be – the name of the boot environment this operation was started on
  • client – the name of the client
  • client_ver – the version of the client
  • command – the command line used for this operation
  • finish – the time that this operation finished
  • id – the user id that started this operation
  • new_be – the new boot environment created by this operation
  • operation – the name of the operation
  • outcome – a summary of the outcome of this operation
  • reason – additional information on the outcome of this operation
  • snapshot – the snapshot taken during this operation; this is only recorded if the snapshot was not automatically removed after successful operation completion
  • start – the time that this operation started
  • time – the total time taken to perform this operation (for operations that take less than a second, “0:00:00” will be printed)
  • user – the username that started this operation

The old “result” column has been split into “result” and “reason” to preserve field formatting, and the old “time” column has been renamed to “start”. The “time” column now contains the total operation time (“finish” – “start”) – I figured that calculating the total operation time for users might be more useful than expecting them to do it manually.
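For the curious, that “time” value is just the difference between the finish and start timestamps, formatted as hours, minutes and seconds. Here’s a rough stand-alone sketch of the calculation (an illustration, not the pkg client’s actual code):

```python
from datetime import datetime

def operation_time(start, finish):
    """Format the elapsed time between two history timestamps as H:MM:SS.
    Sub-second operations come out as "0:00:00", matching the behaviour
    described above."""
    total = int((finish - start).total_seconds())
    hours, rem = divmod(total, 3600)
    minutes, seconds = divmod(rem, 60)
    return "%d:%02d:%02d" % (hours, minutes, seconds)

start = datetime(2010, 11, 10, 11, 30, 2)
finish = datetime(2010, 11, 10, 11, 32, 32)
print(operation_time(start, finish))  # 0:02:30
```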

Finally, pkg history gets a ‘-t’ flag, allowing users to specify a comma-separated list of dates, or ranges of dates, they’re interested in. Previously, users could only choose to see all events, or the last ‘n’ events with the -n flag.

I really like the history subcommand – I’ve found that being able to see over time which packages have been installed and removed from the system, and which operations have failed or succeeded is extremely useful. Being able to find detailed information about how packages have been managed over time gives quite an insight into how people use software. It’d be interesting to use this as input on deciding how to craft custom distributions of Solaris that contain the software that people use in the real world.

We didn’t have a history function in SVR4 that I’m aware of – another point in favour of IPS. History Lives on in Historic Historyville!

Fireworks, 2010

For the past few years, every Halloween, we’ve had fireworks in Dublin. I go out, take photos, post them on my blog along with a bit of a commentary on what I got up to during the day. I generally also give a nod to Z-Day along the way (back in 2005, I was working on the ZFS test team when ZFS integrated into Solaris, something I still feel proud to have been a part of).

This was the first year in a long time that I’ve not done that, and while my birthday was great in every other way, I really missed the fireworks. Halloween just isn’t a big thing here in NZ, and I think that’s a shame. Around my birthday, since the day I was born, there’s always been something extra special in Ireland – fireworks, a spooky atmosphere, and some pretty wonderful parties (I like to think Tim’s Halloween Fancy Dress parties had a bit of a reputation, back in the day).

It turns out that NZ, with its colonial background, has more celebrations for November 5th than it does for Halloween, and Wellington seems to go all-out on the fireworks front. So, armed with my 70-200 f4 (no tripod – I enjoy the swirling patterns you get from handheld fireworks photos), I took off up to the hills of Khandallah this evening, ditched the car wherever I could find some space, and rushed to find a vantage point. I didn’t get the best spot, but I was reasonably happy with the photos I got nonetheless.

The SmugMug album is here, and I’ve thrown in a few thumbnails below (if you want them bigger, go for the SmugMug link instead). I’m looking forward to next year’s fireworks already, wherever I’ll be at that stage.

Talking about IPS in Wellington

I gave a talk about IPS yesterday to a group of sysadmins from Oracle customers here in Wellington, New Zealand.

The talk itself was limited to 30 minutes, and was based on a much longer internal presentation my colleagues Danek & Bart gave earlier this year to the kernel engineering group at Oracle.

The slides for the talk are here.

I think I tried to squeeze a bit too much content in, barely having time to breathe during the talk. I covered the main reasons why sticking with SVR4 packaging & patches is a really bad idea (with this audience, that felt like I was preaching to the choir).

I covered the basic design assertions behind IPS, then went on to talk about actions, dependencies, variants and facets.

If given the chance to do such a talk again, with similar time constraints, I think I’d simply sit down at a command line, and walk through the various pkg(5) command line tools, talking about the details of the packaging system along the way.

As ever though, the best conversations came after the talk was finished – I talked to several customers who’ve been using Solaris 10 and Zones, have experienced patching them, and were keenly interested in getting something better. They seemed to be happy with the direction we’re going in.

I pointed them to the documentation we’ve got up on the project web page, and encouraged them to have a look at older OpenSolaris builds if they wanted to get a preview of IPS as it will appear in Solaris 11.

They asked how complex it would be to convert between SVR4 packages and IPS packages – ‘pkgsend generate‘ is a good start here. That led to a good discussion of how we’ve been converting post-install scripting in Solaris itself over to SMF services that run once on boot, allowing you to be confident that your scripts will run in the environment they were intended to run in.

All in all, I felt good about the talk, but would definitely have liked more time, if only so that I didn’t have to gloss over as much potentially interesting detail as I did.

For my troubles, I was given a rather nice Oracle Solaris coffee cup – thanks Rob!


Some pocket lint, found in my jeans when I did this putback.

With the recent putback to the IPS gate:

changeset:   2046:2522cde7adc2
tag:         tip
user:        Tim Foster 
date:        Thu Aug 26 13:11:20 2010 +1200
	13536 We need a way to audit one or more packages
	15860 publication api needs auditing phase
	15862 pkglint tool needed aid in package creation and auditing
	16828 ProgressTracker should make it easier for others to interleave output
	16875 we should be able to execute tests directly from the source
	16800 pkglint should allow signature actions in obsolete and renamed manifests

we now have pkglint(1), a tool that can check package metadata for common errors before publishing. We never really had an equivalent for SVR4 packages, although many have written scripts to do so. The pkglint man page documents how the tool works.

Out of the box, the below checks are performed on manifests, either retrieved from a repository or passed as local files on the command line. It’s also pretty easy to extend pkglint(1) with your own checks (details in the man page). If you think there might be something missing from this default list, do please let me know.

timf@linn[2808] pkglint -L
pkglint.action.005           pkg.lint.pkglint_action.PkgActionChecker.dep_obsolete
pkglint.action.003           pkg.lint.pkglint_action.PkgActionChecker.legacy
pkglint.action.005           pkg.lint.pkglint_action.PkgActionChecker.license
pkglint.action.001           pkg.lint.pkglint_action.PkgActionChecker.underscores
pkglint.action.004           pkg.lint.pkglint_action.PkgActionChecker.unknown
pkglint.action.002           pkg.lint.pkglint_action.PkgActionChecker.unusual_perms
pkglint.action.006           pkg.lint.pkglint_action.PkgActionChecker.valid_fmri
pkglint.dupaction.002        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_drivers
pkglint.dupaction.006        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_gids
pkglint.dupaction.005        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_groupnames
pkglint.dupaction.008        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_path_types
pkglint.dupaction.001        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_paths
pkglint.dupaction.007        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_refcount_path_attrs
pkglint.dupaction.004        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_uids
pkglint.dupaction.003        pkg.lint.pkglint_action.PkgDupActionChecker.duplicate_usernames
pkglint.manifest.005         pkg.lint.pkglint_manifest.PkgManifestChecker.duplicate_deps
pkglint.manifest.006         pkg.lint.pkglint_manifest.PkgManifestChecker.duplicate_sets
pkglint.manifest.004         pkg.lint.pkglint_manifest.PkgManifestChecker.naming
pkglint.manifest.001         pkg.lint.pkglint_manifest.PkgManifestChecker.obsoletion
pkglint.manifest.002         pkg.lint.pkglint_manifest.PkgManifestChecker.renames
pkglint.manifest.003         pkg.lint.pkglint_manifest.PkgManifestChecker.variants
opensolaris.action.001       pkg.lint.opensolaris.OpenSolarisActionChecker.username_format
opensolaris.manifest.001     pkg.lint.opensolaris.OpenSolarisManifestChecker.missing_attrs

Excluded checks:
opensolaris.manifest.002     pkg.lint.opensolaris.OpenSolarisManifestChecker.print_fmri
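To give a flavour of what one of these checks does, here’s a toy, stand-alone version of an underscores-style check. This deliberately does not use the real pkglint extension API (which involves the checker classes documented in the man page); it just illustrates the kind of thing pkglint.action.001 looks for:

```python
def check_underscores(action_attrs):
    """Toy illustration of an underscores-style lint check: return the
    attribute names containing an underscore. The real check is more
    nuanced - it knows about legitimate underscored attributes such as
    SMF actuators."""
    return [name for name in sorted(action_attrs) if "_" in name]

attrs = {"path": "usr/bin/foo", "my_custom_attr": "bar", "owner": "root"}
print(check_underscores(attrs))  # ['my_custom_attr']
```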

Over the coming weeks, I’ll be addressing some additional bugs and RFEs for pkglint. Once we’re sure it’s stable, I hope to start working with the right folks to see if we can get pkglint(1) runs performed on their gates during their builds.

Many thanks to everyone who helped code review and provide feedback – it was very much appreciated!

Setting up my IPS development environment

There’s probably only a small number of people who’ll find this useful, but this is one of those scripts that I’ve had kicking around for ages and use daily, so I thought it was worth a mention here.

This sets up a developer environment for IPS, pointing $PATH and $PYTHONPATH to the right place. Just run $(ips-env) from anywhere beneath any of your development workspaces, and you’ll be set to run pkg from the proto area of that workspace.

timf@linn[990] echo $PATH
timf@linn[991] $(ips-env)
timf@linn[992] echo $PATH
timf@linn[993] echo $PYTHONPATH
timf@linn[994] cat ~/bin/ips-env

function find_proto {
	# walk up from $PWD until we find a directory containing a proto area
	while [ -z "$PROTO" ] && [ "$PWD" != "/" ] ; do
		if [ ! -d "$PWD/proto/root_$(uname -p)/usr/lib/python2.6/vendor-packages/pkg" ] ; then
			cd ..
		else
			PROTO="$PWD/proto"
		fi
	done
	if [ -d "$PROTO" ] ; then
		echo $PWD/proto
	fi
}

PROTO=$(find_proto)
if [ -z "$PROTO" ] ; then
	echo "No proto found at or above current directory" > /dev/stderr
	exit 2
fi

echo export PYTHONPATH=$PROTO/root_$(uname -p)/usr/lib/python2.6/vendor-packages
echo export PATH=$PROTO/root_$(uname -p)/usr/bin:${PATH}
echo export SRC=$PROTO/../src

2010 Harbour Capital Marathon

I ran the Wellington Harbour Capital Marathon yesterday, finishing 32nd overall, 24th in my category with a time of 3:12:39 – results here. I’m thrilled with the result, and thought a race write-up was in order.

Some beaten up runners, my medal and a source of encouragement

It was a very early start – with the gun going off at 07:30, I was up at about 05:40 in order to get some breakfast in and give it a bit of a chance to get digested. I was in bed early the night before, and the night before that, but didn’t sleep much on Saturday. Still, I’d watched the best pre-marathon advice video I’ve seen (here), and was as prepared as I could be. The weather for the day was a 10°C high, a 7°C low, a 37kph southerly and 87% humidity, according to the race website, and it felt pretty bleak as the missus gave me a lift to the starting area.

We made it to the Westpac Stadium for about 06:40, checked in my bag of clothes for after the race, donned my bin liner and started some gentle stretches & warm-ups.

After the pre-race briefing, we headed out into the pouring rain to the stadium concourse, and lined up at the start. It was a much smaller field than Dublin last year, with 387 people taking part. The half marathon and 10k races, held on the same course that day, had 1,600 and 1,162 finishers respectively and the walks added a few more numbers to those taking part: about 4,000 people in total.

The course itself was an out-and-back route, around the Wellington harbour and bays as far as Breaker Bay and back again. It was cold, dark and windy to begin with, and while the rain eased off a little after a few kilometers, the wind was pretty constant. The wind was both a blessing and a curse, though: there was a good chance the wind you fought in one direction would give you a gentle push on the return journey. I drafted as much as I could, and was happy to reciprocate.

My aim for the run was to beat my previous marathon time of 3:30, with a stretch goal of running a 3:15 – during training, I was racking up long runs at 4m 30s/km so felt I might be able to pull it off.

I ran the first 10k in 47:12, a little slower than my ideal pace, but had reached the halfway point at 1h37m – bang on target. Then, I picked up the pace a little, with a bit of a mental boost coming from running back past the rest of the field who were still on the first half.

At the end of Shelley Bay Road, we hit the half-marathon turn-around point (the half-marathon run was also an out-and-back route), and a lot of human traffic as a result – the road had been ours so far, and we now had to share it with a much bigger field of runners. Going into the race, I’d thought that the half marathon entrants would be running at a much faster pace (and indeed, the winner of that race did it in 1:08:02 so some of them definitely were!) but further back in the field, they were running slower than I was and I overtook a lot of runners with green race numbers (marathon runners had black race numbers). Again I think that might have given me another subconscious boost, but the best boost of all was seeing the missus and the kids at that point, cheering me on!

During the course of the run, I had exchanged a few words with a fellow runner, who it turned out, was running his 13th marathon, and was running at my pace more or less. So, at this point I decided I’d keep him in view for the remainder of the race, and we overtook each other at a few points during the run back, with general words of encouragement, egging each other on.

At about 38k I was getting pretty tired, but hung on for another few kilometers and then gave it everything I had over the final 2k, really picking up the pace and finally sprinting for the line over the last few hundred meters. I was elated to see a sub-3:15 on the clock as I ran over the line, and have a feeling that most of Wellington heard me whooping with delight! The guy I was chasing probably had a good 30 seconds on me at that point, but he was nice enough to wait around for me at the finish line and exchange congratulations.

As for my splits over the course of the race, I’m not entirely sure: I had been trying to record these every 2 kilometers, but with the drinks station positioning and my missing the occasional marker, what I ended up with is below. They seem a bit suspicious to me in places, but they’re reasonably consistent I think.

time kilometer
9:25 1, 2
8:52 3, 4
8:49 5, 6
21:06 7, 8, 9, 10
5:48 11 (this seems too slow)
8:40 12, 13
17:52 14, 15, 16, 17
16:18 18, 19, 20 (also seems too slow)
10:14 21, 22
8:53 23, 24
8:45 25, 26
9:00 27, 28
9:17 29, 30
9:27 31, 32
17:56 33, 34, 35, 36
9:13 37, 38
8:36 39, 40
12:25 41, 42 (I stopped my watch as I crossed the line, but found while
stretching afterwards that I had started the stopwatch again,
so this last split is definitely wrong)

This was my 2nd marathon, but it went a lot better than the last time. I was following exactly the same training plan from Hal Higdon’s website, with the only major gaps being the two weeks at the beginning when I had a cold, and the week or so in the middle when we were moving the family down to New Zealand.

I kept a physical training log this time around, pinned to the fridge so I’d always see it, as well as a version on Google Wave for a bit of self-induced peer-pressure. The yellow entries were runs I completed, and the orange ones were runs I missed; my aim was to make the page turn from white to yellow with as few orange bits as possible (inspired by unit-testing – I think I should have picked green & red highlighters :-). Towards the end of the program, I wasn’t too worried about missing runs during the taper.

Tim's Training Log

I did a few things differently this time around, and I think they helped me a great deal during the race:

  • The Ngaio Gorge and its hill! When we moved to Wellington, my running routes changed to include a lot of hills (Wellington isn’t a flat city) and despite my complaining about them and the really crappy weather we’ve been having here, in retrospect, I think the hill running during the 2nd half of my training was the key thing that made me faster and tougher over the distance.
  • I tried as much as possible to stick to a constant pace – not to panic when things were a little slow at the start of the race, but to just keep an eye on my time credit/debit at the kilometer markers and make gentle adjustments to my pace.
  • I ate jelly babies fairly constantly during the race, as opposed to just when I felt I needed them (or deserved them!) Last time around I alternated between fig rolls and jelly babies, but I found the jelly babies on their own much easier to eat on the run.
  • I only drank water from the drinks stations during the race, and avoided the Powerade that was supplied, with the only exception being the 250ml of fresh orange juice (with a fair bit of salt added) that I was carrying, which I drank at about 37k. If I do another marathon, I might try to find out what energy drink the race provides, and add that to my training to see if it makes a difference.
  • I ran farther during training: I know this is up for debate, but felt that during the last marathon, the 20 mile (32k) long runs I was doing didn’t prepare me for those final 10k. This time, my longest training run was 37k, with an additional 4.1k of a gentle jog/walk back up the Ngaio Gorge – practically a marathon during training, and while I only did that once, I do think it helped.

Today, I feel a lot better than I remember the day after my first marathon – stairs today aren’t as scary as they were last October, and my John Wayne impression isn’t nearly as good as it was last year. That said, there’s an element of this in the way I’m moving about today (except the bit with the nipples, elastoplast FTW :-).

Will I do another marathon? Yes, probably – though I somehow doubt I’ll be able to make as much of an improvement in my PB next time around, we’ll see how things go. For now, recovery beckons.

pkg(5) SMF manifest dependency support

I’ve finally pushed some code that I’ve ended up working on in two different hemispheres!

With the putback of:

changeset:   1933:5193ac03ad9f
tag:         tip
user:        Tim Foster 
date:        Tue Jun 08 12:17:06 2010 +1200
	15305 need to generate dependency information from SMF manifests
	15306 need an attribute to declare the SMF services a package delivers
	15722 pkgdepend doesn't always remove internal deps when variants taken into account

pkgdepend(1) will now use SMF manifests as a source for dependency information.

This means that if you, as a package author, are including SMF manifests in your package, the package publishing system will look in those manifests for any service descriptions that declare other FMRIs as “require_all” dependencies for your service, and will then ensure that the packages delivering those services are marked as dependencies of the package you’re publishing.

Obviously, to do this work, pkgdepend needs to be run on a build machine that includes all the SMF FMRIs your package needs to function. That said, having your publishing system figure out dependencies for you means there’s one less thing for you as a publisher to worry about, and it will certainly make life easier for your users.
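The idea behind the extraction can be sketched roughly like this: walk the manifest’s dependency elements and collect the FMRIs declared under a “require_all” grouping. This is an illustrative, stand-alone snippet using a made-up manifest, not the actual pkgdepend implementation:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical SMF manifest with one require_all dependency
# and one optional_all dependency (which should be ignored).
MANIFEST = """<?xml version="1.0"?>
<service_bundle type="manifest" name="example">
  <service name="site/example" type="service" version="1">
    <dependency name="net" grouping="require_all" restart_on="none" type="service">
      <service_fmri value="svc:/milestone/network:default"/>
    </dependency>
    <dependency name="fs" grouping="optional_all" restart_on="none" type="service">
      <service_fmri value="svc:/system/filesystem/local:default"/>
    </dependency>
  </service>
</service_bundle>"""

def require_all_fmris(manifest_xml):
    """Return the service FMRIs declared as require_all dependencies."""
    root = ET.fromstring(manifest_xml)
    fmris = []
    for dep in root.iter("dependency"):
        if dep.get("grouping") == "require_all":
            for fmri in dep.findall("service_fmri"):
                fmris.append(fmri.get("value"))
    return fmris

print(require_all_fmris(MANIFEST))  # ['svc:/milestone/network:default']
```

pkgdepend then maps each such FMRI back to the package that delivers it, emitting a package-level dependency.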

An interesting side-effect of this work is that you’ll now be able to search the package database for SMF manifest FMRIs. For example, if you’re wondering who delivers svc:/network/iptun:default, you can do a simple search for it –

$ svcs iptun
STATE          STIME    FMRI                                
online         May_27   svc:/network/iptun:default
$ pkg search -l ':::svc\:/network/iptun\:default'
INDEX                ACTION VALUE                      PACKAGE
opensolaris.smf.fmri set    svc:/network/iptun:default pkg:/SUNWcs@0.5.11-0.142

(though do remember to escape the colons in the SMF FMRI field. Wildcards also work, so for kicks, you might want to try pkg search -l ':::svc\:*' :-)
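That colon-escaping rule is easy to get wrong by hand, so here’s a tiny helper wrapping it – a hypothetical convenience function for building search queries, not part of pkg(5) itself:

```python
def escape_search_token(fmri):
    """Escape the ':' characters inside an SMF FMRI so that pkg search
    treats them literally; the ':::' field separators in the query
    itself stay unescaped."""
    return fmri.replace(":", "\\:")

query = ":::" + escape_search_token("svc:/network/iptun:default")
print(query)  # :::svc\:/network/iptun\:default
```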


all change

Sunset, New Zealand

We’ve just gone through LEC (legal entity combination) and Sun Ireland is no more.

I’m not sure how I feel about Oracle yet, but I’m going to stay positive and give them a chance to show me that it’s a great place to work. Certainly, getting paid to work on Solaris continues to be wonderful, but I always felt there was more to working at Sun than just the code. I think we’ve lost something special with the demise of Sun, but I’m looking forward to the next few months to see how things turn out.

Now Oracle, it’s your turn.