Category Archives: open source

Gits

How come I’ve not yet commented on the announcement that Microsoft is buying Github?  OK, pure laziness.  Same reason so much else slips by unblogged.  You’ve got me bang to rights there.

Actually I have commented, albeit elsewhere and not in public.  The question posed to us was whether we had any reaction to it, and the answer was No.  Or at the very least, not yet.  A change to the terms and conditions would call for a reaction.  A change to the user interface and APIs likewise, especially if it involved loss of functionality such as, for example, any tie-in to the new proprietor’s choice of tools.  But a change in ownership doesn’t in itself call for a reaction.

Of course, this is not no-change.  It is a change to the risk profile of using github.  In the past it was VC-backed, and the investors’ goal was to build a business of real value in the market.  To do that, they had to develop a service of real value to its users (i.e. us), which they did over the years.  But an eventual buyout by some bigco was always on the cards, and in retrospect Microsoft was indeed a likely candidate.  With Microsoft, the risk is that github could fall victim to a hostile or misguided corporate agenda.

Microsoft itself has assured us of its good intentions.  I believe those assurances are meant sincerely: the value of Github is its developer community, and they have nothing to gain by alienating us.  They know that a proportion of the userbase will abandon them in a knee-jerk reaction: I guess they factor that into their plans.  On the other hand, no matter how good their intentions, a company the size of Microsoft inevitably encompasses multiple views and agendas, both good and bad, and internal politics.  I can’t quite dismiss the conspiracy theory that, somewhere within MS, someone harbours the intention of setting back the github community and a lot of important projects!

On techie discussion fora (e.g. at El Reg), a lot of folks are taking a different view: MS will destroy github as we know it.  They cite MS acquisitions such as skype and linkedin, and others going further back.  Skype is indeed a troubling example, as they have abandoned so many platforms and users: a course of action that would certainly sound the death-knell for github.  But skype was always closed and proprietary, and it’s likely the whole thing was also thoroughly unmaintainable long before MS acquired it.  MS may have been facing an unenviable choice with no satisfactory options (abandoning the whole thing would also create unhappy users, though it would shorten the pain all round).

Taking the longer history, back in the 1980s I was reasonably happy with MS stuff.  Word seemed good at what it did.  MSVC had the huge virtue of decent documentation, in a world where the existence of TFM was a rare thing!  They first really p***ed me off around the turn of the decade, in part with Windows, but much more so when I found myself the victim of proprietary and closely-guarded software[1].  The zenith of their evilness came later in the ’90s with “Embrace and Extend”, the deliberate breaking of published standards, subversion of the ‘net, and unleashing the first great wave of malware on their own users.  Around that time they were not merely a company without innovation (they acquired new things, from Autoroute to Hotmail, by buying up companies after others had proved an idea), they were actively smothering it.  Some think they were also behind the attack on Linux by SCO, the world’s most preposterous software company, although they weren’t the only company linked to that by circumstantial evidence.  A track record that left them very short of goodwill or trust among developers.

But that was then.  Again from uncertain memory, the first indication I had of the winds of change was in 2006 when a senior MS man gave a presentation at ApacheCon in Dublin.  This was someone seeking to build bridges and retrieve something from the ashes of its reputation.  Open Source was now on the agenda, and MS – or at least some within it – genuinely wanted to be our friends.  Signals since then have been somewhat mixed, but it seems clear at least that MS is no longer the deeply Evil Empire of twenty years ago.  Indeed, I’m sure that if it had been, such great people as my Apache colleagues Gianugo and Ross would never have joined them.

From that seed (one hopes) was born the company that is now buying Github.  This will be a real acid test for their relationship with open source.  I don’t think they want to fail this one!

[1] As I recollect it, an upgrade left me with some important Word documents that simply couldn’t be loaded, and even transferring to another machine with the old version was no help.  I couldn’t even do what I’d do today: google for any discussion of similar problems, or for relevant tools.

Mac vs Open Source

I develop software.

The kind of software I work on rarely concerns itself with details of the platforms it runs on, and is therefore inherently platform-neutral.  Of course complete cross-platform compatibility is elusive, but one does one’s best to adhere to widely-supported standards, libraries known to be cross-platform, etc.  And if something non-standard is unavoidable, try to package it so that switching it out will be clean and straightforward as and when someone has the need.

So it’s with some concern that I see the Mac platform apparently moving to distance itself from the open source world I inhabit.  I’ve got used to the idea that I sometimes have to use clang instead of gcc, and that that gives rise to annoying gotchas when autoconf stuff picks up gcc/g++ in spite of the standard names cc, c++ et al all being the clang versions!  Still, I guess it’s not the platform’s fault if
CC=cc CXX=c++ ./configure --options
behaves inconsistently.

Now it’s OpenSSL that’s been giving me grief.  Working with it on Mac for the first time, I see all the OpenSSL APIs I’m using appear to be deprecated.  Huh?  Googling finds that the whole of OpenSSL is deprecated on Mac.  Thou shalt use CC_crypto(3cc) instead!  Damn!!

OK, what’s CC_crypto?  Given that lots of software I work on uses OpenSSL, it’s only going to be of interest if it emulates OpenSSL (well, if for example it was an OpenSSL fork then that would be a reasonable expectation).  There’s a CC_crypto manpage, and google finds similar information at Apple’s developer site, but therein lies nothing more enlightening than cryptic hints:

To use the digest functions with existing code which uses the corresponding openssl functions, #define the symbol COMMON_DIGEST_FOR_OPENSSL in your client code (BEFORE including <CommonCrypto/CommonDigest.h>).

and

The interfaces to the encryption and HMAC algorithms have a calling interface that is different from that provided by OpenSSL.

Well, if that means it’s mostly OpenSSL-dropin-compatible, why not say so?  Even googling “CC_crypto openssl emulation” doesn’t turn up anything that looks promising, so I haven’t found any relevant documentation.  And since the header files are different, it will at the very least require some preprocessor crap.  OK, ignore it, stick to OpenSSL, kill off the -Werror compiler option, and maybe revisit the issue at some later date.
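For what it’s worth, the digest hint above seems to boil down to something like the following – a minimal, untested sketch rather than anything I’ve put into real code, and note that by Apple’s own wording it covers only the digest functions, not HMAC or the ciphers:

#define COMMON_DIGEST_FOR_OPENSSL
#include <CommonCrypto/CommonDigest.h>

/* With the macro defined before the include, the OpenSSL-style names
 * (MD5_CTX, MD5_Init, MD5_Update, MD5_Final, MD5_DIGEST_LENGTH, ...)
 * should be mapped onto the CC_MD5_* equivalents. */
static void digest(const void *buf, CC_LONG len, unsigned char md[MD5_DIGEST_LENGTH])
{
    MD5_CTX ctx;
    MD5_Init(&ctx);
    MD5_Update(&ctx, buf, len);
    MD5_Final(md, &ctx);
}

If that works as advertised, existing digest code might compile unchanged; anything touching HMAC or the EVP interfaces clearly won’t.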

Not good enough.  The build bombs out when something (not my code, and I’d rather not have to hack it) uses HMAC functions, whose signature on Mac is different to other platforms.  So openssl on Mac – specifically /usr/include/openssl/hmac.h – is nonstandard!  Grrr …  In fact it appears to be some bastardised hybrid: OpenSSL function names with CCHmac-like declarations.  Is this OpenSSL in fact a wrapper for CC_crypto?  If so, why is it all deprecated?  Or if not, who has mutilated the API?

Well OK, that’ll be what Homebrew was talking about when it flashed up some message about installing OpenSSL only under Cellar, and not as a standard/system-wide lib.  So I have another OpenSSL.  Perhaps more?  locate hmac.h finds a whole bunch of versions (ignoring duplicates and glib’s ghmac.h):

/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/include/openssl/hmac.h
/private/var/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/include/openssl/hmac.h
/usr/include/openssl/hmac.h
/usr/local/Cellar/openssl/1.0.2/include/openssl/hmac.h

Of those, only the Cellar version is compatible with the canonical OpenSSL.  A --with-openssl configure option (example below) fixes my immediate problem, but throws up a bunch of questions:

  • Why have I had to jump through these hoops?
  • Where would I start if I want to use CC_crypto as advised in existing OpenSSL-using code?
  • What do I need to keep up-to-date on my system?  Presumably standard apps use the version in /usr, but is anything keeping that updated if homebrew isn’t touching it?
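For the record – and with the caveat that the exact option name depends on the package’s own configure script, and the path on however Homebrew has laid things out – the invocation that got me past the immediate failure was along these lines:

./configure --with-openssl=/usr/local/Cellar/openssl/1.0.2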

Dammit, looks like this Mac may be vulnerable!  Everything in /usr/include/openssl is dated 2011 (when the macbook was new).  The libssl in /usr/lib is dated September 2014 – which suggests it has been updated by some package manager.  But it identifies itself as libssl.0.9.8, which is not exactly current.  Maybe it’s a Good Thing the macbook’s wifi died, so it no longer travels with me outside the house.

WTF is Apple doing to us?

tengine

How well can open source transcend cultural and language barriers?

A few days ago I posted to the ironbee-devel list about the experimental nginx module for Ironbee.  It cannot be incorporated into Ironbee’s normal build/test processes in the manner of the Apache HTTPD and Traffic Server modules, because nginx doesn’t support loadable modules: the Ironbee libraries have to be installed first, so that they can be linked in when nginx itself is built with the module.  This, along with a much more limited API, is presumably one of the design decisions the nginx team made when they focussed firmly on performance over extensibility.
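For anyone unfamiliar with the nginx build: a module is compiled in at configure time, so building nginx with the Ironbee module looks something like this (the path is of course illustrative, and the Ironbee libraries need to be already installed where the linker can find them):

./configure --add-module=/path/to/nginx-ironbee-module
make && make install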

In response to my post, someone drew my attention to an nginx fork called tengine.  The key point of tengine is that it addresses precisely the issue of loadable modules.  And not just that: it supports input filters, opening up the possibility of overcoming another shortcoming of the nginx module – the need to read and buffer an entire request body before scanning it.  Interesting.

I’ve now downloaded tengine, and tried building the nginx-ironbee module for it.  It appears to be fully API-compatible, and the only source change needed arose from their having forked nginx 1.2.6 (the stable version), whereas I had developed the ironbee module using nginx 1.3.x.  All I need to add is a preprocessor directive to detect the nginx version and work around the missing API, and the two are (or appear to be) fully interchangeable (well, until I take advantage of input filtering to improve it further).  This is seriously useful!
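The guard itself is nothing more exotic than testing the nginx_version macro from nginx.h – something along these lines, with the fallback for the missing 1.3.x API going in the older branch (a sketch only; the details depend on which call is being worked around):

#include <nginx.h>    /* defines nginx_version, e.g. 1002006 for 1.2.6 */

#if (nginx_version < 1003000)
    /* tengine is forked from nginx 1.2.x: the 1.3-only call isn't there,
     * so fall back to the older equivalent here */
#else
    /* nginx 1.3.x and later: use the newer API */
#endif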

Tengine has been a collaborative open source effort for two years now (that’s almost as long as TrafficServer), yet this is the first I’d heard of it!  Perhaps one reason for that is that Tengine is made in China.  Just as TrafficServer originated from a single major site (Yahoo) before being open-sourced, so Tengine originates with Taobao and a Chinese developer community.  They have English-language resources including a decent-enough website.  But as a developer I want the mailing list: there is an English-language list, but just looking at archive sizes tells me all the traffic takes place on the corresponding Chinese-language list.

How much of a barrier is language?  I’ve written about that before, and now it’s my turn to find myself on the wrong side of a language barrier.  Actually that applies to nginx too: the Russian-born web server has a core community whose language I don’t speak.  Developing the nginx-ironbee module gave me an opportunity to test a barrier from the outside, and I’m happy to report I got some helpful responses and productive technical discussion on nginx’s English-language developer list.  A welcoming community and no language barrier to what I was doing.

Like other major open source projects, nginx has achieved a critical mass of interest that makes it not merely possible but inevitable that it crosses language barriers.  Not all nginx’s Russian core team participate in English-language lists (nor should they!), but all it takes is one or two insiders with fluent English as points of contact to bridge the divide.  I’ve no idea if I’ll get a good experience on tengine’s English-language list, but I expect I’ll find out now that I’ve heard of tengine and find it meets a need.

Corollary: there is still a language barrier.  Of course!  With Apache I started out developing applications (some of them modules) before making the transition to the core developer team.  With nginx or tengine I know I can’t make that transition – at least not fully.  And because I know that, I’m unlikely to let my work take me in that direction.  The same kind of consideration may or may not have led the tengine team to fork rather than try to work directly with nginx.

Source and non-source repos

Some people engage in Holy Wars over what source control system to use.  For my part I really can’t get too worked up over a choice of tools, but I am concerned about another question.  What files do you keep in a source control repository?

I’d like to say source files.  Program source files, inputs for your choice of build system, legal stuff like licenses and acknowledgements, matters of record, documentation.  The key point is, files that are rightfully under the direct control of project members.  Not files that are generated by software, or managed by third-parties.

In practice, this principle is all too often lost.  One example is Apache HTTPD, whose source repos contain extensive HTML documentation that is not written by developers but generated from XML source.  There’s a clue in the headers of each of these files:

<!--
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
              This file is generated from xml source: DO NOT EDIT
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      -->

So these files are not source, and should really be generated in the build (or made a configuration option) rather than kept under source control.  But apart from raising the overhead of using the repos, they’re harmless.

I’ve recently come upon an altogether more problematic case.  It manifested itself after I’d installed all the prerequisites for a configure to succeed, but found my build fell down in compiling something.  Scrolling up through reams of error messages, I find at the top:

#error This file was generated by a newer version of protoc which is
#error incompatible with your Protocol Buffer headers.  Please update
#error your headers.

OK, that’s simple enough: the version of google protobuf I installed with aptitude is too old.  Go to google and download the latest (cursing google for failing to sign it).  And hack protobuf.m4 to detect this error from configure rather than fall over in the build.

But hang on!  It’s not as simple as that.  This isn’t the usual dependency on a minimum version: it’s a requirement for an exact version of protobuf.  If I install a version that’s too new I get another error:

#error This file was generated by an older version of protoc which is
#error incompatible with your Protocol Buffer headers.  Please
#error regenerate this file with a newer version of protoc.

Altogether more problematic.  Nightmare if I have more than one app, each requiring a different protobuf version.  And this is a library I’m building: it could be linked into just such a mix.  Ouch!

The clue is at the top of the file that generates the errors:

// Generated by the protocol buffer compiler.  DO NOT EDIT!
// source: [filename].proto

This C++ is not source: it’s an intermediate file generated by protoc, which is part of the protobuf package.  Its source is the .proto file, which is also there in the repo but not used for the build.  It follows that hacking protobuf.m4 to test the version was the wrong solution: instead, the build should be updated to generate the intermediate files from the .proto source.
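In other words, the right fix is a rule that runs protoc as part of the build rather than a configure-time version check – roughly this, however the project’s own build system chooses to express it:

protoc --cpp_out=. [filename].proto

That regenerates the .pb.h and .pb.cc from the .proto source with whatever protoc is installed, so the generated code and the installed headers can never disagree.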

Ouch.

Open source, closed process

I just tried to report a bug to Ubuntu.  Nothing major, just a missing package dependency: aptitude installed libnids-dev for me without installing libpcap-dev.  My configure script then insists that nids.h was not found, whereas it is in fact clearly visible in /usr/include/nids.h.  Turns out the test program fails because nids.h #includes pcap.h, which is not installed.  Whoops!
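For the curious, the configure check boils down to compiling something like this – which fails not because nids.h is missing, but because the header it pulls in is:

#include <nids.h>   /* present, courtesy of libnids-dev, but it in turn
                       #includes pcap.h, which only libpcap-dev provides */
int main(void) { return 0; }

Installing libpcap-dev by hand is the obvious workaround; the bug is that libnids-dev doesn’t pull it in as a dependency.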

OK, let’s do the Right Thing for a change: don’t just ignore it, report it.  How do I report a Ubuntu bug?  Aha, it’s at launchpad.net.  Search for nids: nope, none of the 16 bugs listed is this one.  OK, time to report a new bug.

This is where the problems go from straightforward to too difficult.  To report a bug, I need to log in to launchpad.  To log in, I need to create an account (it waffles on about OpenID, but it won’t accept my wordpress OpenID as a login).  And to create an account, I need to solve a captcha.  That is, one of those nasty eyesight tests.

I can’t do it.  This one is nastier than ever.

Cycle the thing a few times, they’re all as bad.  Try the audio version, but it’s silent (this is on a ubuntu machine).  Looks like I can’t report a bug! 😮

I look on freenode, find #ubuntu-devel.  Try asking there:

Just trying to report a bug (missing packaging dependency), but I can’t because I can’t even guess the launchpad captcha
any advice?
The bug is, libnids-dev requires pcap-dev as a dependency

After a few minutes silence, start to blog this.  But a few minutes more and someone replies:

first, I think you meant “libpcap-dev” instead of “pcap-dev”;
second, both these packages come unchanged from Debian, so it’s better to report this bug to bugs.debian.org

Ok, that looks like someone who knows what he’s talking about.  Try a bug report to Debian.  This fortunately turns out to be a much simpler process: their bug reporting site mentions a “reportbug” tool I can install with apt, and which appears to work nicely.

Ubuntu must be effectively in a bubble isolated from the big bad world!

Modules move home

When I first released some Apache modules, I was not yet part of the core development team.  I released modules based at my own site, for whomsoever was interested.  More recently, most new modules I’ve developed have gone straight into the core distribution from apache.org.  I’ve discussed the issue of in or out in this blog before, and this post could be considered a case in point.

One of those earlier modules, mod_proxy_html, turned out to be the solution to a big latent need, and rapidly became my most popular single module.  Since first release in 2003 it’s seen a number of significant improvements, including one for which I had direct sponsorship.  More recently, the advanced internationalisation support that had developed over the years was separated out into a new module mod_xml2enc, so that the same code could be shared with other markup-processing modules without having to duplicate it and maintain multiple copies.

These modules were released as open-source, but without the infrastructure for substantial collaborative development.  At first there wasn’t even a change control repository, though that was introduced fairly early.  There was no bugs database, no general developer forum.  Anyone wanting to participate had the choice of mailing me (which various people have done – sometimes with valuable contributions) or ignoring me and forking their own project (as in mod_proxy_content).

That’s imperfect.  In ideological terms it falls short of an open development model: someone wanting to make more than a minor contribution would have to work with me taking a lead role (hire me?  dream on) or fork.  A bug report or enhancement request would usually but not necessarily get my attention, and if it related to a scenario I couldn’t reproduce, that could present difficulties.  Whoops!  Bottom line: it’s a fine model for a one-man project and somewhat workable as it grows, but it lacks infrastructure support for the community that drives major open projects like Apache’s successes.

Announcement

I can now announce that I’ve donated mod_xml2enc and mod_proxy_html to Apache.  They will feature as standard in webserver releases from the forthcoming 2.4.0.

This gives them a platform to grow and flourish, even if I take a back seat – as inevitably happens from time to time when interest has passed a certain point.  It also has some further implications for developers and users:

  1. Both modules are now relicensed under the Apache License.  They continue to exist under the GPL (or, in the case of mod_xml2enc, dual-licensed) at webthing, so third-party developers and distributors have a choice.
  2. However, there is no guarantee, nor even expectation, that the two versions will remain in step.  It is likely now that the version at apache will be the more up-to-date in future.  That’s where it’ll get the tender loving care of a broad developer community.  My own further work may happen in both places, but is more certain to happen at Apache than WebThing (except in the unlikely event that a paying Client dictates otherwise).

Libxml2 Dependency

This may be of particular interest to packagers.  Most obviously it relieves them of the need to distribute mod_proxy_html as a separate package, but with one proviso.  If these modules are packaged in a standard Apache/HTTPD distribution then libxml2 becomes a dependency of that.

Not a big deal for anything mainstream (though in the distant past it was considered a reason not to accept mod_proxy_html into the core product), but it invites another change.  If you switch from expat to libxml2 for APR’s parser (as described here) you can eliminate expat, and standardise on libxml2 for all markup parsing needs.  One might consider this a good move in any case, as libxml2 is not just more powerful, but also has the more active development community of the two.  The downside then is that you’ve introduced a bigger dependency for any APR users who have no use for HTTPD or libxml2.

That leaves the expat-based module mod_xmlns somewhat orphaned.  I’ll probably get around to switching that one to use libxml2: it’s pretty much a drop-in replacement.  Or maybe I’ll drop it altogether in favour of Joachim Zobel’s mod_xml2, which was (I understand) originally inspired by mod_xmlns but offers an alternative and probably superior platform for XML applications.

OpenOffice at Apache?

Today’s buzz: talk of OpenOffice being donated to the Apache Software Foundation.

Wow!  That’s a Very Big Catch, isn’t it?  Perhaps the biggest since Hadoop?  Or???

Well, maybe.  As of now it’s a long way from a done deal, and it’s by no means clear that it will happen.  To become an Apache project, OpenOffice will first have to be accepted into the incubator, where it will have to demonstrate its suitability before it can graduate.  Apache media guru Sally Khudairi has written about the incubation process here in anticipation of a wave of interest.

The first question is whether OpenOffice will enter the incubator in the first place.  Before the LibreOffice split there’d have been little doubt it would be warmly welcomed, but now there’s a question mark over why Oracle should prefer the ASF to TDF, and whether we Apache folks want to make ourselves party to a legacy of that split.  But if this reaction from the LibreOffice folks represents a consensus then I for one will be happy to accept OpenOffice.

Intellectual Property should be straightforward (because Oracle owns all the rights, inherited from Sun), so the question then becomes how the community will fare.  How much room is there for both projects to thrive?  Who will give their loyalty to ASF in preference to TDF, or equal loyalty to both?  Could separate competing projects become a Good Thing and foster innovation, or will it just add duplication and confusion to no real purpose?

There is a likely driver for an Apache version: contributors who prefer the Apache License over the GPL.  That could drive interest particularly from companies like IBM who maintain their own derivative products.  Whether that will give rise to a thriving community, and perhaps a development focus distinct from that of LibreOffice, remains to be seen: that’s part of what incubation will tell us.

Anyway, if OpenOffice enters incubation at Apache, I’d expect that to be make or break for it.  If it thrives then we could see “Apache OpenOffice” at some future date.  If not, then it pretty clearly cedes the future to LibreOffice.  If only they could find a better name …

Shane’s blog links to lots more good reading.

Forking

An entertaining talk at FOSDEM was Michael Meeks’s, on the fork from OpenOffice to LibreOffice[1].  At the same time as delivering the now-popular message of community and open development, he was taking some quite partisan potshots at other FOSS models that unambiguously share those very values.  Hmmm … good entertainment, but perhaps unduly provocative.  Interestingly, OpenOffice and LibreOffice both had stalls at FOSDEM, separated by only one independent exhibitor! 😮

From an outsider’s viewpoint[2], there was one thing I found reassuring.  Namely, the tensions that led to the split had existed during Sun’s time, before the Oracle takeover.  Thus whatever mistakes may have happened are not new.  I like to think Oracle is building on what Sun did right and drawing a line under what was wrong.  It would’ve been sad to hear that Oracle had damaged something Sun was doing right, and Meeks’s talk reassures me that hasn’t happened in this case.

The open-source-but-owned-and-controlled development model, such as (most famously) that of MySQL, can work, but seems to have fallen comprehensively out of favour with FOSS communities.  It’s at its best where third parties are minor contributors, but is likely to lead to a fork if outside developers are taking a major interest.  And it’s never good to send mixed messages to the community: they’ll remember the big claims when you back-pedal.

[1] How is anyone supposed to promote a program the pronunciation of whose very name is a stumbling-block?  Shot in the foot there, methinks.  Is that the laughter of Redmond I hear?

[2] I’m a user of OpenOffice but have never contributed to its development, nor am I familiar with its community.

Furthering the interests of Free Software?

Or not.

The Free Software Foundation (FSF) has gone public with a statement on the Oracle vs Google litigation.  The FSF is of course free to do so, and since it’s also a campaigning organisation we should not be surprised when they do.  But does the statement itself stand up to scrutiny?

Before going any further, I should make it clear: this is a comment on the FSF’s position statement.  No matter where this appears aggregated, I don’t represent anyone or anything other than myself.  Any views I may have on the FSF itself, on Oracle or Google, on Java implementations, Android/Dalvik, on patents (software or otherwise) or on anyone/anything else, fall outside the scope of this posting.  Nor should this be taken as comment on the FSF beyond this single document: as it happens, I am in general terms an admirer of the FSF.

The introduction is clear enough:

As you likely heard on any number of news sites, Oracle has filed suit against Google, claiming that Android infringes some of its Java-related copyrights and patents. Too little information is available about the copyright infringement claim to say much about it yet; we expect we’ll learn more as the case proceeds. But nobody deserves to be the victim of software patent aggression, and Oracle is wrong to use its patents to attack Android.

That’s fair: the FSF’s position against software patents is rational and consistent.  Oracle vs Google is one of many patent cases currently in the courts throughout the rapidly-growing mobile devices space: some other household names that spring to mind include Apple, Nokia, HTC, and of course the victim of the biggest injustice, Blackberry-maker RIM.  But it’s also fair to say Oracle vs Google may have more far-reaching repercussions than the others, insofar as it may affect Free Software in the Android ecosystem.

The second paragraph is more problematic:

Though it took longer than we would’ve liked, Sun Microsystems ultimately did the right thing by the free software community when it released Java under the GPL in 2006. […]

That’s fair as far as it goes, but it’s becoming a partisan statement within FOSS when you implicitly dismiss the ongoing controversy over licensing a TCK.  The third paragraph goes on to say:

Now Oracle’s lawsuit threatens to undo all the good will that has been built up in the years since. Programmers will justifiably steer clear of Java when they stand to be sued if they use it in some way that Oracle doesn’t like. […]

Hang on!  How is that new?  The entire TCK issue is about field-of-use restrictions that are problematic for free software!  At the same time, let’s not forget that Java was hugely popular among Free Software developers even before 2006: these controversies matter only to an activist minority.

If the above is nitpicking, paragraph 4 is altogether more suspect.  Let’s quote it in full:

Unfortunately, Google didn’t seem particularly concerned about this problem until after the suit was filed. The company still has not taken any clear position or action against software patents. And they could have avoided all this by building Android on top of IcedTea, a GPL-covered Java implementation based on Sun’s original code, instead of an independent implementation under the Apache License. The GPL is designed to protect everyone’s freedom—from each individual user up to the largest corporations—and it could’ve provided a strong defense against Oracle’s attacks. It’s sad to see that Google apparently shunned those protections in order to make proprietary software development easier on Android.

Erm, this really is an attack on Apache!  How would IcedTea have helped here?  The only valid argument that it might have done is that rights were granted with Sun’s original code.  I don’t think it’s clear to anyone outside the Oracle and Google legal teams whether and to what extent such ‘grandfather’ rights might affect the litigation.  As far as licenses are concerned, the Apache License is a lot stronger on protection against patent litigation than the GPLv2 under which IcedTea is licensed.  Indeed, in separate news, Mozilla (another major player in Free Software) is updating its MPL license, and says of its update:

The highlight of this release is new patent language, modeled on Apache’s. We believe that this language should give better protection to MPL-using communities, make it possible for MPL-licensed projects to use Apache code, and be simpler to understand.

Well, Mozilla is coming from a starting point closer to the GPL than Apache.  It seems I’m not alone in supposing the Apache license offers the better patent protection, contrary to the FSF’s implication!

Finally the tone[1] of the FSF statement, as expressed for example in the final paragraph, makes me uneasy:

Oracle once claimed that it only sought software patents for defensive purposes. Now it is using them to proactively attack free software.

Hmmm, attacking Android/Dalvik is proactively attacking free software?  While it’s a supportable position it’s also (to say the least) ambiguous, and you haven’t made a case to convince a sceptic.  Or a judge.

[1] Not to mention the grammar, up on which some readers of this blog will undoubtedly pick.

Fear Novell. Or buy Novell.

Remember SCO?  The world’s saddest, most ludicrous software company?  Well, if not, Groklaw has a rich and colourful (not to mention opinionated) archive on the subject.

The ghost of SCO has long since joined that of Jarndyce & Jarndyce, the perpetual litigants.  But this week, an actual decision by a Utah jury: Novell owns the Unix copyrights.

Some believe SCO’s litigation was inherently doomed: there’s nothing to be had from Unix IP.  Yes, there’s value, but that’s long since been opened to the world, and of course independently re-engineered elsewhere, most importantly in GNU/Linux.

Others take a different view: there’s gold beyond the dreams of avarice in that Unix IP.  SCO had a great idea; they just made a hash of executing it.  After all, in the real world, pirates have taken such major companies as Blackberry-maker RIM and even Microsoft to the cleaners over IP that is, by any standards, a drop in the ocean set against UNIX.

So when a hedge fund bids for Novell, I expect they’re in the latter camp.  They’re not an Oracle: a huge and powerful software company getting Sun, a complementary crown-jewel company, on the cheap.  They’re a pure money-machine.  They have no business of their own for Novell’s to fit into.  So it seems likely they want the crown jewels of Novell’s IP.

That was before the jury declared Novell owner of such an important part of the IP!  It must be worth more now, to a cash-rich wannabe-pirate.

Novell under current management has shown itself benign, and hero of the SCO story.  Under other management, all bets would be off.  The fact that they rejected one bid (or did they?) doesn’t necessarily mean they’ll always be able to do so – that’s up to the shareholders.

How much is it worth to lay that spectre to rest?  Are you a shareholder, and if not, why not?
