GreenAsh – All-natural web development, poignant wit, and hippie ramblings since 2004
https://greenash.net.au/

Ozersk: the city on the edge of plutonium
https://greenash.net.au/thoughts/2022/08/ozersk-the-city-on-the-edge-of-plutonium/
Mon, 29 Aug 2022 00:00:00 +0000

Ozersk (also spelled Ozyorsk) – originally known only by the codename Chelyabinsk-40 – is the site of the third-worst nuclear disaster in world history, as well as the birthplace of the Soviet Union's nuclear weapons arsenal. But I'll forgive you if you've never heard of it (I hadn't, until now). Unlike numbers one (Chernobyl) and two (Fukushima), the USSR managed to keep both the 1957 incident, and the place's very existence, secret for over 30 years.

Ozersk being so secret that few photos of it are available, is a good enough excuse, in my opinion, for illustrating this article with mildly amusing memes instead.
Image source: Legends Revealed

Amazingly, to this day – more than three decades after the fall of communism – this city of about 100,000 residents (and its surrounds, including the Mayak nuclear facility, which is ground zero) remains a "closed city", with entry forbidden to all non-authorised personnel.

And, apart from being enclosed by barbed wire, it appears to also be enclosed in a time bubble, with the locals still routinely parroting the Soviet propaganda that labelled them "the nuclear shield and saviours of the world"; and with the Soviet-era pact still effectively in place that, in exchange for their loyalty, their silence, and a none-too-healthy dose of radiation, their basic needs (and some relative luxuries to boot) are taken care of for life.

Map of Ozersk and surrounds. Bonus: the most secret places and the most radioactive places are marked!
Image source: Google Maps

So, as I said, there's very little information available on Ozersk, because to this day: it remains forbidden for virtually anyone to enter the area; it remains forbidden for anyone involved to divulge any information; and it remains forbidden to take photos / videos of it, or to access any documents relating to it. For over three decades, it had no name and was marked on no maps; and honestly, it seems like that might as well still be the case.

Oh yeah, Russia's got one of those too.
Image source: imgflip

Frankly, even had the 1957 explosion not occurred, the area would still be horrendously contaminated. Both before and after that incident, they were dumping enormous quantities of unadulterated radioactive waste directly into the water bodies in the vicinity, and indeed, they continue to do so, to this day. It's astounding that anyone still lives there – especially since, ostensibly, you know, Russians are able to live where they choose these days.

I love the smell of strontium-90 in the morning.
Image source: imgflip

As far as I can gather, from the available sources, most of the present-day residents of Ozersk are the descendants of those who were originally forced to go and live there in Stalin's time. And, apparently, most people who have been born and raised there since the fall of the USSR, choose to stay, due to: family ties; the government continuing to provide for their wellbeing; ongoing patriotic pride in fulfilling their duty; fear of the big bad world outside; and a belief (foolhardy or not) that the health risks are manageable. It's certainly possible that there are more sinister reasons why most people stay; but then again, in my opinion, it's not implausible that no outright threats or prohibitions are needed, in order to maintain the status quo.

Except Boris, every second Tuesday, when he does his vodka run.
Image source: Wonkapedia

Only one insider appears to have denounced the whole spectacle in recent history: lifelong resident Nadezhda Kutepova, who gave an in-depth interview to Western media several years ago. Kutepova fled Ozersk, and Russia, after threats were made against her, due to her campaigning to expose the truth about the prevalence of radiation sickness in her home town.

And only one outsider appears to have ever gotten in covertly, lived, and told the tale: Samira Goetschel, who produced City 40, the world's only documentary about life in Ozersk (the film features interview footage with Kutepova, along with several other Ozersk locals). Honestly, life inside North Korea has been covered more comprehensively than this.

Surely you didn't think I'd get through a whole meme-replete article on nuclear disaster without a Simpsons reference?!
Image source: imgflip

Considering how things are going in Putin's Russia, I don't imagine anything will be changing in Ozersk for a long time yet. Looks like business as usual – utterly trash the environment, manufacture dodgy nuclear stuff, maintain total secrecy, brainwash the locals, cause sickness and death – is set to continue indefinitely.

You can find much more in-depth information, in many of the articles and videos that I've linked to. Anyway, in a nutshell, Ozersk: you've never been there, and you'll never be able to go there, even if you wanted to. Which you don't. Now, please forget everything you've just read. This article will self-destruct in five seconds.

GDPR-compliant Google reCAPTCHA
https://greenash.net.au/thoughts/2022/08/gdpr-compliant-google-recaptcha/
Sun, 28 Aug 2022 00:00:00 +0000

Per the EU's GDPR and ePrivacy Directive, you must ask visitors to a website for their consent before setting any non-essential cookies, and/or before collecting any user tracking data. And because the GDPR protects everyone located in the EU, regardless of where in the world a website or its owner is based, in practice you should seek consent from all visitors to all websites globally in order to fully comply.

In order to be GDPR-compliant, and in order to just be a good netizen, I made sure, when building GreenAsh v5 earlier this year, to not use services that set cookies at all, wherever possible. In previous iterations of GreenAsh, I used Google Analytics, which (like basically all Google services) is a notorious GDPR offender; this time around, I instead used Cloudflare Web Analytics, which is a good enough replacement for my modest needs, and which ticks all the privacy boxes.

However, on pages with forms at least, I still need Google reCAPTCHA. I'd like to instead use the privacy-conscious hCaptcha, but Netlify Forms only supports reCAPTCHA, so I'm stuck with it for now. Here's how I seek the user's consent before loading reCAPTCHA.

// `ready` is a DOM-ready helper defined elsewhere in the site's JS.
ready(() => {
  const submitButton = document.getElementById('submit-after-recaptcha');

  if (submitButton == null) {
    return;
  }

  window.originalSubmitFormButtonText = submitButton.textContent;
  submitButton.textContent = 'Prepare to ' + window.originalSubmitFormButtonText;

  submitButton.addEventListener("click", e => {
    if (submitButton.textContent === window.originalSubmitFormButtonText) {
      return;
    }

    const agreeToCookiesMessage =
      'This will load Google reCAPTCHA, which will set cookies. Sadly, you will ' +
      'not be able to submit this form unless you agree. GDPR, not to mention ' +
      'basic human decency, dictates that you have a choice and a right to protect ' +
      'your privacy from the corporate overlords. Do you agree?';

    if (window.confirm(agreeToCookiesMessage)) {
      const recaptchaScript = document.createElement('script');
      recaptchaScript.setAttribute(
        'src',
        'https://www.google.com/recaptcha/api.js?onload=recaptchaOnloadCallback' +
        '&render=explicit');
      recaptchaScript.setAttribute('async', '');
      recaptchaScript.setAttribute('defer', '');
      document.head.appendChild(recaptchaScript);
    }

    e.preventDefault();
  });
});

(View on GitHub)

I load this JS on every page, thus putting it on the lookout for forms that require reCAPTCHA (in my case, that's comment forms and the contact form). It changes the form's submit button text from, for example, "Send", to instead be "Prepare to Send" (as a hint to the user that clicking the button won't actually submit the form; there will be further action required before that happens).

It hijacks the button's click event, such that if the user hasn't yet provided consent, it shows a prompt. When consent is given, the Google reCAPTCHA JS is added to the DOM, and reCAPTCHA is told to call recaptchaOnloadCallback when it's done loading. If the user has already provided consent, then the button's default click behaviour of triggering form submission is allowed.

{%- if params.recaptchaKey %}
<div id="recaptcha-wrapper"></div>
<script type="text/javascript">
window.recaptchaOnloadCallback = () => {
  document.getElementById('submit-after-recaptcha').textContent =
    window.originalSubmitFormButtonText;
  window.grecaptcha.render(
    'recaptcha-wrapper', {'sitekey': '{{ params.recaptchaKey }}'}
  );
};
</script>
{%- endif %}

(View on GitHub)

I embed this HTML inside every form that requires reCAPTCHA. It defines the wrapper element into which the reCAPTCHA is injected. And it defines recaptchaOnloadCallback, which changes the submit button text back to what it originally was (e.g. changes it from "Prepare to Send" back to "Send"), and which actually renders the reCAPTCHA widget.

<!-- ... -->

  <form other-attributes-here data-netlify-recaptcha>
    <!-- ... -->

    {% include 'components/recaptcha_loader.njk' %}
    <p>
      <button type="submit" id="submit-after-recaptcha">Send</button>
    </p>
  </form>

<!-- ... -->

(View on GitHub)

This is what my GDPR-compliant, reCAPTCHA-enabled, Netlify-powered contact form looks like. The data-netlify-recaptcha attribute tells Netlify to require a successful reCAPTCHA challenge in order to accept a submission from this form.

The prompt before the reCAPTCHA in action

That's all there is to it! Not rocket science, but I just thought I'd share this with the world, because despite there being a gazillion posts on the interwebz advising that you "ask for consent before setting cookies", there seem to be surprisingly few step-by-step instructions explaining how to actually do that. And the standard advice appears to be to use a third-party script / plugin that implements an "accept cookies" popup for you, even though it's really easy to implement it yourself.

Introducing: Instant-runoff voting simulator
https://greenash.net.au/thoughts/2022/05/introducing-instant-runoff-voting-simulator/
Tue, 17 May 2022 00:00:00 +0000

I built a simulator showing how instant-runoff voting (called preferential voting in Australia) works step-by-step. Try it now.

The simulator in action

I hope that, by being an interactive, animated, round-by-round visualisation of the ballot distribution process, this simulation gives you a deeper understanding of how instant-runoff voting works.

The rules coded into the simulator are those used for the House of Representatives in Australian federal elections, as specified in the Commonwealth Electoral Act 1918 (Cth) s 274.
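
For readers who just want the gist of the algorithm, here's a minimal sketch of the core counting loop in plain JS. It's illustrative only – not the simulator's actual code, which also handles informal ballots, tie-breaking, logging, and the animation – but it captures the round-by-round elimination.

// Minimal instant-runoff sketch (illustrative only, not the simulator's real code).
// Each ballot is an array of candidate names, in preference order.
function instantRunoff(ballots, candidates) {
  const remaining = new Set(candidates);

  while (remaining.size > 1) {
    // Tally each ballot against its highest remaining preference.
    const tallies = new Map([...remaining].map(c => [c, 0]));
    for (const ballot of ballots) {
      const choice = ballot.find(c => remaining.has(c));
      if (choice !== undefined) {
        tallies.set(choice, tallies.get(choice) + 1);
      }
    }

    // A candidate with an absolute majority of the live votes wins.
    const total = [...tallies.values()].reduce((a, b) => a + b, 0);
    const sorted = [...tallies.entries()].sort((a, b) => b[1] - a[1]);
    if (sorted[0][1] * 2 > total) {
      return sorted[0][0];
    }

    // Otherwise, exclude the candidate with the fewest votes; their ballots
    // flow to each ballot's next remaining preference in the next round.
    remaining.delete(sorted[sorted.length - 1][0]);
  }

  return [...remaining][0];
}

// e.g. instantRunoff([['A', 'B'], ['B', 'A'], ['B', 'C']], ['A', 'B', 'C']) returns 'B'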

There are other tools around that do basically the same thing as this simulator. Kudos to the authors of those tools. However, they only output a text log or a text-based table; they don't provide any visualisation or animation of the vote-counting process. And they spit out the results for all rounds at once; they don't show (quite as clearly) how the results evolve from one round to the next.

Source code is all up on GitHub. It's coded in vanilla JS, with the help of the lovely Papa Parse library for CSV handling. I made a nice flowchart version of the code too.

See an interactive version of this flowchart on code2flow

With a federal election coming up, here in Australia, in just a few days' time, this simulator means there's now one less excuse for any of my fellow citizens to not know how the voting system works. And, in this election more than ever, it's vital that you properly understand why every preference matters, and how you can make every preference count.

Scott Morrison is not keen on ABC interviews
https://greenash.net.au/thoughts/2022/05/scott-morrison-is-not-keen-on-abc-interviews/
Mon, 09 May 2022 00:00:00 +0000

Scott Morrison surprised the fine folks over at the national broadcaster recently, by turning down their invitation for a pre-election debate with Anthony Albanese, instead choosing to have all three of his televised debates take place on commercial channels.

I have also made the casual observation, over the last three years, that Morrison makes few appearances on Aunty in general, compared with the commercial alternatives, particularly Sky News (which I personally have never watched directly, and have no plans to, but I've seen plenty of clips of Morrison on Sky repeated on the ABC and elsewhere).

This led me to do some research, to find out: how often has Morrison taken part in ABC interviews, during his tenure so far as Prime Minister, compared with his predecessors? I compiled my findings, and this is what they show:

Morrison's ABC interview frequency compared to his forebears

It's official: Morrison has, on average, taken part in fewer ABC TV and Radio interviews than any other Prime Minister in recent Australian history.

I hope you find my humble dataset useful. Apart from illustrating Morrison's disdain for the ABC, there are also tonnes of other interesting analyses that could be performed on the data, and tonnes of other conclusions that could be drawn from it.

My findings are hardly surprising, considering Morrison's flagrant preference for sensationalism, spin, and the far-right fringe. I leave it as an exercise to the reader, to draw from my dataset what conclusions you will, regarding the fate of the already Coalition-scarred ABC, should Morrison win a second term.

Update, 18 May 2022: check it out, this article has been re-published on Independent Australia!

Is Australia supporting atrocities in West Papua?
https://greenash.net.au/thoughts/2022/04/is-australia-supporting-atrocities-in-west-papua/
Sat, 30 Apr 2022 00:00:00 +0000

A wee lil' fact check style analysis of The Juice Media's 2018 video Honest Government Ad: Visit West Papua. Not that I don't love TJM's videos (that would be un-Australien!). Just that a number of the claims in that particular piece took me by surprise.

Nobody (except Indonesia!) is disputing that an awful lot of bad stuff has happened in Indonesian Papua over the last half-century or so, and that the vast majority of the blood is on the Indonesian government's hands. However, is it true that the Australian government is not only silent on, but also complicit in, said bad stuff?

Indonesia's favourite copper mine!
Image source: YouTube

Note: in keeping with what appears to be the standard in English-language media, for the rest of this article I'll be referring to the whole western half of the island of New Guinea – that is, to both of the present-day Indonesian provinces of Papua and West Papua – collectively as "West Papua".

Grasberg mine

Let's start with one of the video's least controversial claims – one that's about a simple objective fact, and one that has nothing to do with Australia:

[Grasberg mine] … the biggest copper and gold mine in the world

Close enough

I had never heard of the Grasberg mine before. Just like I had never heard much in general about West Papua before – even though it's only about 200km from (the near-uninhabited northern tip of) Australia. Which I guess is due to the scant media coverage afforded to what has become a forgotten region.

The Grasberg mine's main open pit
Image source: Wikimedia Commons

Anyway, Grasberg is indeed big (it's the biggest mine in Indonesia), it's massively polluting, and it's extremely lucrative (both for US-based Freeport-McMoRan and for the Indonesian government).

It's really, seriously contaminating: this is the Aikwa "river" downstream from Grasberg
Image source: The Guardian

Grasberg is actually the second-biggest gold mine in the world, based on total reserves, but it's a close second. And it's the fifth-biggest gold mine in the world, based on production. It's the tenth-biggest copper mine in the world. And Freeport-McMoRan is the third-biggest copper mining company in the world. Exact rankings vary by source and by year, but Grasberg ranks near the top consistently.

I declare this claim to be very close to the truth.

Accessory to abuses

we've [Australia] done everything we can to help our mates [Indonesian National Military] beat the living Fak-Fak out of those indigenous folks

Exaggerated

Woah, woah, woah! Whaaa…?

Yes, Australia has supplied the Indonesian military and the Indonesian police with training and equipment over many years. And yes, some of those trained personnel have gone on to commit human rights abuses in West Papua. And yes, there are calls for Australia to cease all support for the Indonesian military.

Unrest in West Papua
Image source: International Academics for West Papua

But. Are a significant number of Australian-trained Indonesian government personnel deployed in West Papua, compared with elsewhere in the vastness of Indonesia? We don't know (although it seems unlikely). Does Australia train Indonesian personnel in a manner that encourages violence towards civilians? No idea (but I should hope not). And does Australia have any control over what the Indonesian government does with the resources provided to it? Not really.

I agree that, considering the Indonesian military's track record of human rights abuses, it would probably be a good idea for Australia to stop resourcing it. The risk of Australia indirectly facilitating human rights abuses, in my opinion, outweighs the diplomatic and geopolitical benefits of neighbourly cooperation.

Nevertheless: Australia (as far as we know) has no boots on the ground in West Papua; (I have to reluctantly say that) Australia is not responsible for how the Indonesian military utilises the training and equipment that it has received; and there's insufficient evidence to link Australia's support of the Indonesian military to date, with goings-on in West Papua.

I declare this claim to be exaggerated.

Corporate plunder

so that our other mates [Rio Tinto, LG, BP, Freeport-McMoRan] can come in and start makin' the ching ching

Close enough

At the time that the video was made (2018), Rio Tinto owned a significant stake in the Grasberg mine, and it most certainly was "makin' the ching ching" from that stake. Although shortly after that, Rio Tinto sold all of its right to 40% of the mine's production, and is now completely divested of its interest in the enterprise. Rio Tinto is a British-Australian company, and is most definitely one of the Australian government's mates.

Freeport-McMoRan has, of course, been Grasberg's principal owner and operator for most of the mine's history, as well as the principal entity that has been raking in the mine's insane profits. The company has some business ventures in Australia, although its ties with the Australian economy, and therefore with the Australian government, appear to be quite modest.

BP is the main owner of the Tangguh gas field, which is probably the second-largest and second-most-lucrative (and second-most-polluting!) industrial enterprise in West Papua. BP is of course a British company, but it has a significant presence in Australia. LG appears to also be involved in Tangguh. LG is a Korean company, and it has modest ties to the Australian economy.

The Tangguh LNG site in West Papua
Image source: KBR

So, all of these companies could be considered "mates" of the Australian government (some more so than others). And all of them are, or until recently were, "makin' the ching ching" in West Papua.

I declare this claim to be very close to the truth.

Stopping independence

Remember when two Papuans [Clemens Runaweri and Willem Zonggonau] tried to flee to the UN to expose this bulls***? We [Australia] prevented them from ever getting there, by detaining them on Manus Island

Checks out

Well, no, I don't remember it, because – apart from the fact that it happened long before I was born – it's an incident that has scarcely ever been covered by the media (much like the lack of media coverage of West Papua in general). Nevertheless, it did happen, and it is documented:

In May 1969, two young West Papuan leaders named Clemens Runaweri and Willem Zonggonau attempted to board a plane in Port Moresby for New York so that they could sound the alarm at UN headquarters. At the request of the Indonesian government, Australian authorities detained them on Manus Island when their plane stopped to refuel, ensuring that West Papuan voices were silenced.

Source: ABC Radio National

After being briefly detained, the two men lived the rest of their lives in exile in Papua New Guinea. Zonggonau died in Sydney in 2006, where he and Runaweri were visiting, still campaigning to free their homeland until the end. Runaweri died in Port Moresby in 2011.

Interestingly, it also turns out that the detaining of these men, along with several hundred other West Papuans, in the late 1960s, was the little-known beginning of Australia's now-infamous use of Manus Island as a place to let refugees rot indefinitely.

Celebrating half a century of locking up innocent people on remote Pacific islands
Image source: The New York Times

I declare this claim to be 100% correct.

Training hitmen

We [Australia] helped train [at the Indonesia-Australia Defence Alumni Association (IKAHAN)] and arm those heroes [the hitmen who assassinated the Papuans' leader Theys Eluay in 2001]

Exaggerated

Theys Eluay was indeed the chairman of the Papua Presidium Council – he was even described as "Papua's would-be first president" – and his death in 2001 was indeed widely attributed to the Indonesian military.

There's no conclusive evidence that the soldiers who were found guilty of Eluay's murder (who were part of Kopassus, the Indonesian special forces), received any training from Australia. However, Australia has provided training to Kopassus over many years, including during the 1980s and 1990s. This co-operation has continued into more recent times, during which claims have been made that Kopassus is responsible for ongoing human rights abuses in Papua.

Don't mess with Kopassus
Image source: ABC

I don't know why IKAHAN was mentioned together with the 2001 murder of Eluay, because it wasn't founded until 2011, so one couldn't possibly have anything to do with the other. It's possible that Eluay's killers received Australian-backed training elsewhere, but not there. Similarly, it's possible that training undertaken at IKAHAN has contributed to other shameful incidents in West Papua, but not that one. Mentioning IKAHAN does nothing except conflate the facts.

In any case, I repeat, (I have to reluctantly say that) Australia is not responsible for how the Indonesian military utilises the training and equipment that it has received; and there's insufficient evidence to link Australia's support of the Indonesian military to date, with goings-on in West Papua.

I declare this claim to be exaggerated.

Shipments from Cairns

which [Grasberg mine] is serviced by massive shipments from Cairns. Cairns! The Aussie town supplying West Papua's Death Star with all its operational needs

"Citations needed"

This claim really came at me out of left field. So much so, that it was the main impetus for me penning this article as a fact check. Can it be true? Is the laid-back tourist town of Cairns really the source of regular shipments of supplies, to the industrial hellhole that is Grasberg?

Cargo ships at the Port of Cairns
Image source: Bulk Handling Review

I honestly don't know how TJM got their hands on this bit of intel, because there's barely any mention of it in any media, mainstream or otherwise. Clearly, this was an arrangement that all involved parties made a concerted effort to keep under the radar for many years.

In any case, yes, it appears to be true. Or, at least, it was true at the time that the video was published, and it had been true for about 45 years, up until that time. Then, in 2019, the shipping, and Freeport-McMoRan's presence in town, apparently disappeared from Cairns, presumably replaced by alternative logistics based in Indonesia (and presumably due to the Indonesian government having negotiated to make itself the majority owner of Grasberg shortly before that).

It makes sense logistically. Cairns is one of the closest fully-equipped ports to Grasberg, only slightly further away than Darwin. Much closer than Jakarta or any of the other big ports in the Indonesian heartland. And I can imagine that, for various economic and political reasons, it may well have been easier to supply Grasberg primarily from Australia rather than from elsewhere within Indonesia.

I would consider that this claim fully checks out, if I could find more sources to corroborate it. However, there's virtually no word of it in any mainstream media; and the sources that do mention it are old and of uncertain reliability.

I declare this claim to be "citations needed".

Verdict

Australia is proud to continue its fine tradition of complicity in West Papua

Exaggerated

In conclusion, I declare that the video "Honest Government Ad: Visit West Papua", on the whole, checks out. In particular, its allegation of the Australian government being economically complicit in the large-scale corporate plunder and environmental devastation of West Papua – by way of it having significant ties with many of the multinational companies operating there – is spot-on.

But. Regarding the Australian government being militarily complicit in human rights abuses in West Papua, I consider that to be a stronger allegation than is warranted. Providing training and equipment to the Indonesian military, and then turning a blind eye to the Indonesian military's actions, is deplorable, to be sure. Australia being apathetic towards human rights abuses, would be a valid allegation.

To be "complicit", in my opinion, there would have to be Australian personnel on the ground, actively committing abuses alongside Indonesian personnel, or actively aiding and abetting such abuses.

Don't get me wrong, I most certainly am not defending Australia as the patron saint of West Papua, and I'm not absolving Australia of any and all responsibility towards human rights abuses in West Papua. I'm just saying that TJM got a bit carried away with the level of blame they apportioned to Australia on that front.

Protesting for West Papuan independence
Image source: new mandala

Also, bear in mind that the only reason I'm "going soft" on Australia here, is due to a lack of evidence of Australia's direct involvement militarily in West Papua. It's quite possible that there is indeed a more direct involvement, but that all evidence of it has been suppressed, both by Indonesia and by Australia.

And hey, I'm trying to play devil's advocate in this here article, which means that I'm giving TJM more of a grilling than I otherwise would, were I to simply preach my unadulterated opinion.

I'd like to wholeheartedly thank TJM for producing this video (along with all their other videos). Despite me giving them a hard time here, the video is – as TJM themselves tongue-in-cheek say – "surprisingly honest!". It educated me immensely, and I hope it educates many more folks just as immensely, as to the lamentable goings-on, right on Australia's doorstep, about which we Aussies (not to mention the rest of the world) hear unacceptably little.

The Australian government is, at the very least, one of those responsible for maintaining the status quo in West Papua. And "business as usual" over there clearly includes a generous dollop of atrocities.

On the Proof of Humanity project
https://greenash.net.au/thoughts/2022/04/on-the-proof-of-humanity-project/
Tue, 19 Apr 2022 00:00:00 +0000

Proof of Humanity (PoH) is a project that I stumbled upon a few weeks ago. Its aim is to create a registry of every living human on the planet. So far, it's up to about 15,000 out of 7 billion.

Just for fun, I registered myself, so I'm now part of that tiny minority who, according to PoH, are verified humans! (Sorry, I guess the rest of you are just an illusion).

Actual bona fide humans

This is a brief musing on the PoH project: its background story, the people behind it, the technology powering it, the socio-economic philosophy behind it, the challenges it's facing, whether it stacks up, and what I think lies ahead.

The story

Most people think of Proof of Humanity in terms of its technology. That is, as a cryptocurrency thing, because it's all built on the Ethereum blockchain. So, yes, it's a crypto project. But, unlike almost every other crypto project, it has little to do with money (although some critics disagree), and everything to do with democracy.

The story begins in 2012, in Buenos Aires, Argentina (a part of the world that I know well and that's close to my heart), when an online voting platform called DemocracyOS was built, and when Pia Mancini founded a new political party called Partido de la Red, which promised it would vote in congress the way constituents told it to vote, law by law (similar to many pirate parties around the world). In 2014, Pia presented all this in a TED talk.

How to upgrade democracy for the Internet era
Image source: TED

DemocracyOS – which, by the way, is still alive and kicking – has nothing to do with crypto. It's just a simple voting app. Nor does it handle identity in any innovative way. The pilot in Argentina just relied on voters providing their official government-issued ID documents in order to vote. DemocracyOS is about enabling direct democracy, giving more people a voice, and fighting corruption.

In 2015, Pia Mancini and her partner Santiago Siri – along with Herb Stephens – founded Democracy Earth, which is when crypto entered the mix. The foundation's seminal paper "The Social Smart Contract" laid down (in exhaustive detail) the technical design for a new voting platform based on blockchain. The original plan was for the whole thing to be built on Bitcoin (Ethereum was brand-new at the time).

(Side note: the Democracy Earth paper was actually the thing that I stumbled across, while googling stuff related to direct democracy and liquid democracy. It was only that paper, that then led me to discover Proof of Humanity.)

To make the voting platform feasible, the paper argued, a decentralised "Proof of Identity" solution was needed – the design that the paper spells out for such a system, is clearly the "first draft" of what would later become Proof of Humanity. The paper also presents the spec for a universal basic income being paid to everyone on the platform, which is one of the key features of PoH today.

When Pia and Santiago welcomed their daughter Roma Siri into the world in 2015, they gave her the world's first ever "blockchain valid birth certificate" (using the Bitcoin blockchain). The declaration stated verbally in the video, and the display of the blockchain address in the visual recording, are almost exactly the same as the declaration and the public key that are present in the thousands of PoH registration videos to date.

Roma Siri: the world's first blockchain verified human

The original plan was for Democracy Earth itself to build a blockchain-based voting platform. Which they did: it was called Sovereign, and it launched in 2016. Whereas DemocracyOS enables direct democracy, Sovereign takes things a step further, and enables liquid democracy.

Fast-forward to 2018: Kleros, a "decentralised court", was founded by Federico Ast (another Argentinian) and Clément Lesaege (a Frenchman), all built on Ethereum. Kleros has different aims to Democracy Earth, although it describes its mission as "access to justice and individual freedom". Unlike Democracy Earth, Kleros is not a foundation, although it's not a traditional for-profit company either.

From right: Santiago Siri, Federico Ast, Paula Berman, and Juan Llanos, at the "first Proof of Humanity meetup" in Osaka, Japan, Oct 2019
Image source: Twitter

And fast-forward again to 2021. Proof of Humanity is launched, as an Ethereum Dapp ("decentralised app"). Officially, PoH is independent of any "real-life" people or organisations, and is purely governed by a DAO ("decentralised autonomous organisation").

In practice, the launch announcements are all published by Kleros; the organisations behind PoH are recognised as being Kleros and Democracy Earth; Clément Lesaege and Santiago Siri are credited as the architects of PoH; and the PoH DAO's inaugural board members are Santiago Siri, Herb Stephens, Clément Lesaege, and Federico Ast.

The main selling point that PoH has pitched so far, is that everyone who successfully registers receives a stream of UBI tokens, which will (apparently!) reduce world poverty and global inequality.

PoH participants are also able to vote on "HIPs" (Humanity Improvement Proposals) – i.e. proposed changes to the PoH smart contract, so basically, equivalent to voting on pull requests for the project's main codebase – I've already cast my first vote. Voting is powered by Snapshot, which appears to be the successor platform to Sovereign – but I'm waiting for someone to reply to my question about that.

PoH is still in its infancy. It doesn't even have a Wikipedia page yet. I wrote a draft Proof of Humanity Wikipedia page, but, despite a lengthy argument with the moderators, I wasn't able to get it published, because apparently there's still insufficient reliable independent coverage of the project. You're welcome to add more sources, to try and satisfy the pedantic gatekeepers over there.

Challenges

By far the biggest challenge to the growth and the success of Proof of Humanity right now, is the exorbitant transaction fees (known as "gas fees") charged by the Ethereum network. Considering that its audience is (ostensibly) every human on the planet, you'd think that registering with PoH would be free, or at least very cheap. Think again!

You have to pay a deposit, which is currently 0.125 ETH (approximately $400 USD), and which is refunded once your profile is successfully verified (and believe it or not – I'm telling you from personal experience – they do refund it). That amount isn't trivial, even for a privileged first-worlder like myself.

But you also, in my personal experience, have to pay at least another 10% on top of that (i.e. 0.012 ETH, or $40 USD), in non-refundable gas fees, to cover the estimated processing power required to execute the PoH smart contract for your profile. Plus another 10% or so (could well be more, depending on your circumstances) if you need to exchange fiat money for Ethereum, and back again, in order to pay the deposit and to recover it later.

$9 change, please
Image source: Obscure Train Movies

So, a $400 USD deposit, which you lose if your profile is challenged (and your appeal fails), and which takes at least a week to get refunded to you. Plus $80 USD in fees. Plus it's all denominated in a highly volatile cryptocurrency, whose value could plummet at any time. That's a pretty steep price tag, for participation in "a cool experiment" that has no real-world utility right now. Would I spend that money and effort again, to renew my PoH profile when it expires in two years' time? Unless it gains some real-world utility, probably not.

Also a major challenge, is the question of how to give the UBI tokens any real value. UBI can be traded on the open market (although the only exchange that actually allows it to be bought and sold right now is the Argentinian Ripio). When Proof of Humanity launched in early 2021, 1 UBI was valued at approximately $1 USD. Since then, its value has consistently declined, and 1 UBI is now valued at approximately $0.04 USD.

UBI is highly inflationary by design. Every verified PoH profile results in 1 UBI being minted per hour. So every time the number of verified PoH profiles doubles, the rate of UBI minting doubles. And currently there's zero demand for UBI, because there's nothing useful that you can do with it (including investing or speculating in it!). The PoH community is actively discussing possible solutions, but there's no silver bullet.
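
To put rough numbers on that (my own back-of-the-envelope arithmetic based on the figures quoted above, not official PoH numbers): 1 UBI per profile per hour means each verified human accrues 24 UBI per day, or about 8,760 UBI per year; with roughly 15,000 verified profiles, that's in the order of 130 million new UBI minted every year, before any further growth in registrations.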

To top it all off, it's still not clear whether or not PoH will live up to its purported aim, which is to create a Sybil-proof list of humans. The hypothesis underpinning it all, is that a video submission – featuring visual facial movement, and verbal narration – is too high a bar for AI to pass. Deepfake technology, while still in its infancy, is improving rapidly. PoH is betting on Deepfake's capability plateauing below that bar. Time will tell how that gamble unfolds.

PoH is also placing enormous trust in each member of the community of already-verified humans, to vet new profile submissions as representing real, unique humans. It's a radical and unproven experiment. That level of trust has traditionally been reserved for nation-states and their bureaucracies. There are defences built-in to PoH, but time will tell how resilient they are.

Musings

I'm not a crypto guy. The ETH that I bought in order to pay the PoH deposit, is my first ever cryptocurrency holding (and, in keeping with conservative mainstream advice, it's a modest amount, not more than I can afford to lose).

My interest in PoH is from a democratic empowerment point of view, not from a crypto nor a financial point of view. The founders of PoH claim to have the same underlying interest at heart. If that's so, then I'm afraid I don't really understand why they built it all on top of Ethereum, which is, at the end of the day, a financial instrument.

Putting the Democracy in Earth
Image source: The University of Queensland

Sure, the PoH design relies on hash proofs, and it requires some kind of blockchain. But they could have built a new blockchain specifically for PoH, one that's not a financial instrument, and one that's completely free as in beer. Instead, they've built a system that's coupled to the monetary value of, at the mercy of the monetary fees of, and vulnerable to the monetary fraud / scams of, the underlying financial network.

Regarding UBI: I think I'm a fan of it – I more-or-less wrote a universal basic income proposal myself, nine years ago. Not unlike what PoH has done, I too proposed that a UBI should be issued in a new currency that's not controlled by any sovereign nation-state (although what I had in mind was that it be governed by some UN-like body, not by something as radical as a DAO).

However, I can't say I particularly like the way that "self-sovereignty" and UBI have been conflated in PoH. I would have thought that the most important use case for PoH would be democratic voting, and I feel that the whole UBI thing is a massive distraction from that. What's more, many of the people who have registered with PoH to date, have done so hoping to make a quick buck with UBI, and is that really the group of people we want, as the pioneers of PoH? (Plus, I hate to break it to you, all you folks banking on UBI, but you're going to be disappointed.)

So, do I think PoH "stacks up"? Well, it's not a scam, although clearly all the project's founders are heavily invested in crypto, and do stand to gain from the success of anything crypto-related. Call me naïve, but I think the people behind PoH are pure of heart, and are genuinely trying to make the world a better place. I can't say I agree with all their theories, but I applaud their efforts.

Just needed to add broccoli
Image source: Meme Creator

And do I think PoH will succeed? If it can overcome the critical challenges that it's currently facing, then it stands some chance of one day reaching a critical mass, and of proving itself at scale. Although I think it's much more likely that it will remain a niche enclave. I'd be pleasantly surprised if PoH reaches 5 million users, which would be about 0.1% of internet-connected humanity, still a far cry from World Domination™.

Say what you will about it, Proof of Humanity is a novel, fascinating idea. Regardless of whether it ultimately succeeds in its aims, and regardless of whether it even can or should do so, I think it's an experiment worth conducting.

Introducing: Hack Your Bradfield Vote
https://greenash.net.au/thoughts/2022/04/introducing-hack-your-bradfield-vote/
Sun, 10 Apr 2022 00:00:00 +0000

I built a tiny site that I humbly hope makes a tiny difference in my home electorate of Bradfield, this 2022 federal election. Check out Hack Your Bradfield Vote.

How "Hack Your Bradfield Vote" looks on desktop and mobile

I'm not overly optimistic, here in what is one of the safest Liberal seats in Australia. But you never know, this may finally be the year when the winds of change rustle the verdant treescape of Sydney's leafy North Shore.

I don't need a VPS anymore
https://greenash.net.au/thoughts/2022/03/i-dont-need-a-vps-anymore/
Tue, 22 Mar 2022 00:00:00 +0000

I've paid for either a "shared hosting" subscription, or a VPS subscription, for my own use, for the last two decades. Mainly for serving web traffic, but also for backups, for Git repos, and for other bits and pieces.

But, as of now, it's with bittersweetness that I declare that that era in my life has come to a close. No more (personal) server that I wholly or partially manage. No more SSH'ing in. No more updating the Linux kernel / packages. No more Apache / Nginx setup. No more MySQL / PostgreSQL administration. No more SSL certificates to renew. No more CPU / RAM usage to monitor.

No more defending against evil villains!
Image source: Meme Generator

In its place, I've taken the plunge and fully embraced SaaS. In particular, I've converted most of my personal web sites, and most of the other web sites under my purview, to be statically generated, and to be hosted on Netlify. I've also moved various backups to S3 buckets, and I've moved various Git repos to GitHub.

And so, you may lament that I'm yet one more netizen who has Less Power™ and less control. Yet another lost soul, entrusting these important things to the corporate overlords. And you have a point. But the case against SaaS is one that's getting harder to justify with each passing year. My new setup is (almost entirely) free (as in beer). And it's highly available, and lightning-fast, and secure out-of-the-box. And sysadmin is now Somebody Else's Problem. And the amount of ownership and control that I retain, is good enough for me.

The number one thing that I loathed about managing my own VPS, was security. A fully-fledged Linux instance, exposed to the public Internet 24/7, is a big responsibility. There are plenty of attack vectors: SSH credentials compromise; inadequate firewall setup; HTTP or other DDoS'ing; web application-level vulnerabilities (SQL injection, XSS, CSRF, etc); and un-patched system-level vulnerabilities (Log4j, Heartbleed, Shellshock, etc). Unless you're an experienced full-time security specialist, and you're someone with time to spare (and I'm neither of those things), there's no way you'll ever be on top of all that.

I too have sinned.
Image source: TAG Cyber

With the new setup, I still have some responsibility for security, but only the level of responsibility that any layman has for any managed online service. That is, responsibility for my own credentials, by way of a secure password, which is (wherever possible) complemented with robust 2FA. And, for GitHub, keeping my private SSH key safe (same goes for AWS secret tokens for API access). That's it!

I was also never happy with the level of uptime guarantee or load handling offered by a VPS. If there was a physical hardware fault, or a data centre networking fault, my server and everything hosted on it could easily become unreachable (fortunately this seldom happened to me, thanks to the fine folks at BuyVM). Or if there was a sudden spike in traffic (malicious or not), my server's CPU / RAM could easily get maxxed out and become unresponsive. Even if all my sites had been static when they were VPS-hosted, these would still have been constant risks.

Don't worry. I've sent an email.
Image source: YouTube

With the new setup, both uptime and load have a much higher guarantee level, as my sites are now all being served by a CDN, either CloudFront or Netlify's CDN (which is similar enough to CloudFront). Pretty much the most highly available, highly resilient services on the planet. (I could have hooked up CloudFront, or another CDN, to my old VPS, but there would have been non-trivial work involved, particularly for dynamic content; whereas, for S3 / CloudFront, or for Netlify, the CDN Just Works™).

And then there's cost. I had quite a chunky 4GB RAM VPS for the last few years, which was costing me USD$15 / month. Admittedly, that was a beefier box than I really needed, although I had more intensive apps running on it, several years ago, than I've had running over the past year or two. And I felt that it was worth paying a bit extra, if it meant a generous buffer against sudden traffic spikes that might gobble up resources.

Ain't nothin' like a beefy server setup.
Image source: The Register

Whereas now, my main web site hosting service, Netlify, is 100% free! (There are numerous premium bells and whistles that Netlify offers, but I don't need them). And my main code hosting service, GitHub, is 100% free too. And AWS is currently costing me less than USD$1 / month (with most of that being S3 storage fees for my private photo collection, which I never stored on my old VPS, and for which I used to pay Flickr quite a bit more money than that anyway). So I consider the whole new setup to be virtually free.

Apart from the security burden, sysadmin is simply never something that I've enjoyed. I use Ubuntu exclusively as my desktop OS these days, and I've managed a number of different Linux server environments (of various flavours, most commonly Ubuntu) over the years, so I've picked up more than a thing or two when it comes to Linux sysadmin. However, I've learnt what I have, out of necessity, and purely as a means to an end. I'm a dev, and what I actually enjoy doing, and what I try to spend most of my time doing, is dev work. Hosting everything in SaaS land, rather than on a VPS, lets me focus on just that.

In terms of ownership, like I said, I feel that my new setup is good enough. In particular, even though the code and the content for my sites now has its source of truth in GitHub, it's Git, it's completely exportable and sync-able, I can pull those repos to my local machine and to at-home backups as often as I want. Same for my files for which the source of truth is now S3, also completely exportable and sync-able. And in terms of control, obviously Netlify / S3 / CloudFront don't give me as many knobs and levers as things like Nginx or gunicorn, but they give me everything that I actually need.

I think I own my new setup well enough.
Image source: Wikimedia Commons

Purists would argue that I've never even done real self-hosting, that if you're serious about ownership and control, then you host on bare metal that's physically located in your home, and that there isn't much difference between VPS- and SaaS-based hosting anyway. And that's true: a VPS is running on hardware that belongs to some company, in a data centre that belongs to some company, only accessible to you via network infrastructure that belongs to many companies. So I was already a heretic, now I've slipped even deeper into the inferno. So shoot me.

20-30 years ago, deploying stuff online required your own physical servers. 10-20 years ago, deploying stuff online required at least your own virtual servers. It's 2022, and I'm here to tell you, that deploying stuff online purely using SaaS / IaaS offerings is an option, and it's often the quickest, the cheapest, and the best-quality option (although can't you only ever pick two of those? hahaha), and it quite possibly should be your go-to option.

Email-based comment moderation with Netlify Functions
https://greenash.net.au/thoughts/2022/03/email-based-comment-moderation-with-netlify-functions/
Thu, 17 Mar 2022 00:00:00 +0000

The most noteworthy feature of the recently-launched GreenAsh v5, programming-wise, is its comment submission system. I enjoyed the luxury of the robust batteries-included comment engines of Drupal and Django, back in the day; but dynamic functionality like that isn't as straightforward in the brave new world of SSGs. I promised that I'd provide a detailed run-down of what I built, so here goes.

Some of GreenAsh's oldest published comments, looking mighty fine in v5.

In a nutshell, the way it works is as follows:

  1. The user submits their comment via a simple HTML form powered by Netlify Forms
  2. The submission gets saved to the Netlify Forms data store
  3. The submission-created event handler sends the site admin (me!) an email containing the submission data and a URL
  4. The site admin opens the URL, which displays an HTML form populated with the submission data
  5. After eyeballing the submission data, the site admin enters a secret token to authenticate
  6. The site admin clicks "Approve", which writes the new comment to a JSON file, pushes the code change to the site's repo via the GitHub Contents API, and deletes the submission from the data store via the Netlify Forms API (or the site admin clicks "Delete", in which case it just deletes the submission from the data store)
  7. Netlify rebuilds the site in response to a GitHub code change as usual, thus publishing the comment

The initial form submission is basically handled for me, by Netlify Forms. The bit where I had to write code only begins at the submission-created event handler. I could have POSTed form submissions directly to a serverless function, and that would have allowed me a lot more usage for free. Netlify Forms is a premium product, with a not-particularly-generous free tier of only 100 (non-spam) submissions per site per month. However, I'd rather use it, and live with its limits, because:

  • It has solid built-in spam protection, and defence against spam is something that was my problem for nearly the past 20 years, and I'd really really like for it to be Somebody Else's Problem from now on
  • It has its own data store of submissions, which I don't strictly need (because I'm emailing myself each submission), but which I consider really nice to have, if for any reason the email notifications don't reach me (and I also have many years of experience with unreliable email delivery), and which would be a pain (and totally not worth it) to build myself in a serverless way (it would need something like DynamoDB, API Gateway, and various lambdas; it would be a whole project in itself)
  • I can interact with that data store via a nice API
  • I can review spam submissions in the Netlify Forms UI (which is good, because I don't get notified of them, so otherwise I'd have no visibility over them)
  • Even if I bypassed Netlify Forms, I'd still have to send myself a customised email notification, which I do, using the SparkPost Transmissions API, which has a free tier limit of 500 emails per month anyway

So, the way the event handler works is that, to hook it up, all you have to do is create a file in your repo with the magic name netlify/functions/submission-created.js (magic that isn't as well documented as it could be, if you ask me, which is why I'm pointing it out here as explicitly as possible). You can see my full event handler code on GitHub. Here's the meat of it:

// Loosely based on:
// https://www.seancdavis.com/posts/netlify-function-sends-conditional-email/
// Note: the helpers used below (escapeHtml, getModerateUrl, getNotifyMailSubject,
// getNotifyMailText, doRequest) and the constants are defined elsewhere in the
// full file on GitHub.
const sendMail = async (
  sparkpostToken,
  fromEmail,
  toEmail,
  siteName,
  siteDomain,
  title,
  path,
  id,
  date,
  name,
  email,
  url,
  comment,
) => {
  const options = {
    hostname: SPARKPOST_API_HOSTNAME,
    port: HTTPS_PORT,
    path: SPARKPOST_TRANSMISSION_API_ENDPOINT,
    method: "POST",
    headers: {
      Authorization: sparkpostToken,
      "Content-Type": "application/json",
    }
  };

  const commentSafe = escapeHtml(comment);
  const moderateUrl = getModerateUrl(
    siteDomain, title, path, id, date, name, url, commentSafe
  );

  let data = {
    options: {
      open_tracking: false,
      click_tracking: false,
    },
    recipients: [
      {
        address: {
          email: toEmail,
        },
      },
    ],
    content: {
      from: {
        email: fromEmail,
      },
      subject: getNotifyMailSubject(siteName, title),
      text: getNotifyMailText(name, email, url, comment, moderateUrl),
    },
  };

  try {
    return await doRequest(options, JSON.stringify(data));
  } catch (e) {
    console.error(`SparkPost create transmission call failed: ${e}`);
    throw e;
  }
};
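
For context, the handler wrapper that ends up calling sendMail looks roughly like the sketch below. This is not the actual code from the repo, just the general shape of a Netlify submission-created function; the field names and environment variable names are assumptions.

// netlify/functions/submission-created.js (rough skeleton, illustrative only)
exports.handler = async (event) => {
  // Netlify passes the form submission to this function as JSON in the event body.
  const { payload } = JSON.parse(event.body);
  const { name, email, url, comment } = payload.data; // field names are assumptions

  // Configuration (tokens, site details) would come from environment variables, e.g.:
  // await sendMail(process.env.SPARKPOST_TOKEN, fromEmail, toEmail, siteName,
  //                siteDomain, title, path, id, date, name, email, url, comment);

  return { statusCode: 200, body: "OK" };
};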

The way I'm crafting the notification email is pretty similar to the way my comment notification emails worked before in Django. That is, the email includes the commenter's name and email, and the comment body, in readable plain text. And it includes a URL that you can follow, to go and moderate the comment. In Django, that was simply a URL to the relevant page in the admin. But this is a static site; it has no admin. So it's a URL to a form, and the URL includes all of the submission data, encoded into it as GET parameters.
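
As an illustration of what getModerateUrl boils down to (the real helper is in the full file on GitHub; the endpoint path and parameter names here are assumptions), the gist is just serialising the submission data into a query string:

// Hypothetical sketch only: build the moderation URL by encoding the submission
// data as GET parameters on the moderation form's endpoint.
const getModerateUrl = (siteDomain, title, path, id, date, name, url, comment) => {
  const params = new URLSearchParams({ title, path, id, date, name, url, comment });
  return `https://${siteDomain}/.netlify/functions/comment-form?${params}`;
};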

How the comment notification email looks in GMail

Clicking the URL then displays an HTML form, which is generated by another serverless function, the code for which you can find here. That HTML form doesn't actually need to be generated by a function – it could itself be a static page (containing some client-side JS to populate the form fields from GET parameters) – but it was just as easy to make it a function, and it effectively costs me no money either way, and I thought, meh, I'm in functions land anyway.

All the data in that form gets populated from what's encoded in the clicked-on URL, except for token, which I have to enter in manually. But, because it's a standard HTML password field, I can tell my browser to "remember password for this site", so it gets populated for me most of the time. And it's dead-simple HTML, so I was able to make it responsive with minimal effort, which is good, because it means I can moderate comments on my phone if I'm out and about.
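
Sketching what that form-generating function amounts to (again, not the actual code on GitHub; the action endpoint and the exact field set are assumptions): it reads the submission data from the query string, and echoes it straight into the form fields.

// Rough sketch of the moderation form generator (illustrative only).
exports.handler = async (event) => {
  // The submission data arrives as GET parameters on the clicked-on URL
  // (the real form carries more fields, e.g. title, date, and the commenter's URL).
  const { id, path, name, comment } = event.queryStringParameters;

  const body = `
    <form method="POST" action="/.netlify/functions/comment-action">
      <input type="hidden" name="id" value="${id}">
      <input type="hidden" name="path" value="${path}">
      <p><label>Name: <input name="name" value="${name}"></label></p>
      <p><label>Comment: <textarea name="comment">${comment}</textarea></label></p>
      <p><label>Token: <input type="password" name="token"></label></p>
      <button name="action" value="approve">Approve</button>
      <button name="action" value="delete">Delete</button>
    </form>`;

  return {
    statusCode: 200,
    headers: { "Content-Type": "text/html" },
    body,
  };
};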

The comment moderation form
The comment moderation form

Having this intermediary HTML form is necessary, because a clickable URL in an email can't POST directly (and I certainly don't want to actually write comments to the repo in a GET request). It's also good, because it means that the secret token has to be entered manually in the browser, which is more secure, and less mistake-prone, than the alternative of sending the secret token in the notification email and including it in the URL. And it gives me a slightly nicer UI (slightly nicer than email, that is) in which to eyeball the comment, and it gives me the opportunity to edit the comment before publishing it (which I sometimes do, usually just to fix formatting, not to censor or distort what people have to say!).

Next, we get to the business of actually approving or rejecting the comment. You can see my full comment action code on GitHub. Here's where the approval happens:

const approveComment = async (
  githubToken,
  githubUser,
  githubRepo,
  netlifyToken,
  id,
  path,
  title,
  date,
  name,
  url,
  comment,
) => {
  try {
    let existingSha;
    let existingJson;
    let existingComments;

    try {
      const existingFile = await getExistingCommentsFile(
        githubToken, githubUser, githubRepo, path
      );
      existingSha = existingFile.sha;
      existingJson = getExistingJson(existingFile);
      existingComments = getExistingComments(existingJson);
    } catch (e) {
      existingSha = null;
      existingJson = {};
      existingComments = [];
    }

    const newComments = getNewComments(existingComments, date, name, url, comment);
    const newJson = getNewJson(existingJson, newComments);

    await putNewCommentsFile(
      githubToken, githubUser, githubRepo, path, title, date, name, newJson, existingSha
    );

    await purgeComment(id, netlifyToken);

    return { statusCode: 200, body: "Comment approved" };
  } catch (e) {
    // Don't swallow the error silently; surface it in the function's logs.
    console.error(`Comment approval failed: ${e}`);
    return { statusCode: 400, body: "Failed to approve comment" };
  }
};

I'm using Eleventy's template data files (i.e. posts/subdir/my-first-blog-post.11tydata.json style files) to store the comments, in simple JSON files alongside the thought content files themselves, in the repo. So the comment approval function has to append to the relevant JSON file if it already exists, otherwise it has to create the relevant JSON file from scratch. That's why the first thing the function does, is try to get the existing JSON file and its comments, and if none exists, then it sets the list of existing comments to an empty array.
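
To illustrate, a comments data file might end up looking something like the following. The exact top-level key and field names here are my guess, based on the arguments that the helper functions above take; the real structure is visible in the repo:

{
  "comments": [
    {
      "date": "2022-08-30T09:15:00Z",
      "name": "Jane Citizen",
      "url": "https://example.com/",
      "comment": "Great write-up, thanks!"
    }
  ]
}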

The function appends the new comment to the existing comments array, serializes the new array to JSON, and writes the new JSON file to the repo. Both interactions with the repo – reading the existing comments file, and writing the new file – are done using the GitHub Contents API, as simple HTTP calls (the PUT call results in a new commit on the repo's default branch). This way, the function doesn't have to interact with Git directly, i.e. it doesn't have to clone the repo, read from the filesystem, perform a commit, or push the change (and, therefore, it doesn't need an SSH key; it just needs a GitHub API token).
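
For the curious, the write side boils down to a single PUT request against the Contents API, roughly along these lines. This is a sketch only, not my actual putNewCommentsFile: I'm using Node 18's global fetch here for brevity, and the commit message is simplified:

// A sketch of writing the comments JSON file via the GitHub Contents API.
// A PUT to this endpoint creates a commit on the repo's default branch;
// "sha" must be the existing file's sha when updating, and omitted when creating.
const putCommentsFile = async (githubToken, githubUser, githubRepo, filePath, newJson, existingSha) => {
  const response = await fetch(
    `https://api.github.com/repos/${githubUser}/${githubRepo}/contents/${filePath}`,
    {
      method: "PUT",
      headers: {
        Authorization: `token ${githubToken}`,
        Accept: "application/vnd.github+json",
        "User-Agent": "comment-moderation-function",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        message: "Add new comment",
        content: Buffer.from(JSON.stringify(newJson, null, 2)).toString("base64"),
        ...(existingSha ? { sha: existingSha } : {}),
      }),
    },
  );

  if (!response.ok) {
    throw new Error(`GitHub Contents API call failed: ${response.status}`);
  }

  return response.json();
};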

The newly-approved comment in the GitHub UI's commit details screen
The newly-approved comment in the GitHub UI's commit details screen

From that point on, just like for any other commit pushed to the repo's default branch, Netlify receives a webhook notification from GitHub, and that triggers a standard Netlify deploy, which builds the latest version of the site using Eleventy.

Netlify re-deploying the site with the new comment
Netlify re-deploying the site with the new comment

The only other thing that the comment approval function does, is the same thing (and the only thing) that the comment rejection function does, which is to delete the submission via the Netlify Forms API. This isn't strictly necessary: I could just let the comments sit in the Netlify Forms data store forever (and as far as I know, Netlify has no limit on how many submissions it will store indefinitely for free, only on how many submissions it will process per month for free).
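
The delete call itself is about as simple as HTTP calls get; something like this sketch (again assuming Node 18's global fetch, and assuming Netlify's documented submissions endpoint):

// A sketch of deleting a moderated submission via the Netlify API.
const purgeSubmission = async (id, netlifyToken) => {
  const response = await fetch(`https://api.netlify.com/api/v1/submissions/${id}`, {
    method: "DELETE",
    headers: { Authorization: `Bearer ${netlifyToken}` },
  });

  if (!response.ok) {
    throw new Error(`Netlify submission delete failed: ${response.status}`);
  }
};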

But by deleting each comment from there after I've moderated it, the Netlify Forms data store becomes a nice "todo queue", should I ever need one to refer to (i.e. should my inbox not be a good enough such queue). And I figure that a comment really doesn't need to be stored anywhere else, once it's approved and committed in Git (and, conversely, it really doesn't need to be stored anywhere at all, once it's rejected).

The new comment can be seen in the Netlify UI before it gets approved or rejected
The new comment can be seen in the Netlify UI before it gets approved or rejected

The old Django-powered site was set up to immediately publish comments (i.e. no moderation) on thoughts that were less than one month old; and to publish comments after they'd been moderated, for thoughts that were up to one year old; and to close comment submission, for thoughts that were more than one year old.

Publishing comments immediately upon submission (or, at least, within a minute or two of submission, allowing for Eleventy build time / Netlify deploy time) would be possible in the new site, but personally I'm not comfortable with letting actual Git commits (as opposed to just database inserts) get triggered directly like that. So all comments will now be moderated. And, for now, I'm keeping comment submission open for all thoughts, old or new, and hopefully Netlify's spam protection will prove tougher than my old defences (the only reason why I'd closed comments for older thoughts, in the past, was due to a deluge of spam).

I should also note that the comment form on the new site has a (mandatory) "email" field, same as on the old site. However, on the old site, I was able to store the emails of commenters in the Django database indefinitely, while not rendering them in the front-end, thus keeping them confidential. In the new site, I don't have that luxury, because if the emails are in Git, then (even if they're not rendered in the front-end) they're publicly visible on GitHub (unless I were to make the whole repo private, which I specifically don't want to do: I want the site itself to be open source!).

So, in the new site, emails of commenters are included in the notification email that gets sent to me (so that I can contact the commenter should I want to or need to), and they're stored (usually only temporarily) in the Netlify Forms data store, but they don't make it anywhere else. Rest assured, commenters, I respect your privacy, I will never publish your email address.

Commenting: because everyone's voice deserves to be heard!
Commenting: because everyone's voice deserves to be heard!
Image source: Illawarra Mercury

Well, there you have it, my answer to "what about comments" in the static serverless SaaS web of 2022. For your information, there's another, more official solution for powering comments with Netlify and Eleventy, with a great accompanying article. And, full disclosure, I copied quite a few bits and pieces from that project. My main gripe with the approach taken there, is that it uses Slack, instead of email, for the notifications. It's not that I don't like Slack – I've been using it every day for work, across several jobs, for many years now (albeit not by choice) – but, call me old-fashioned if you will, I prefer good ol' email.

More credit where it's due: thanks to this article that shows how to push a comments JSON file directly to GitHub (which I also much prefer, compared to the official solution's approach of using the Netlify Forms data store as the source of truth, and querying it for approved comments during each site build); this one that shows how to send notification emails from Netlify Functions; and this one that shows how to connect a form to a submission-created.js function. I couldn't have built what I did, without standing on the shoulders of giants.

You've read this far, all about my whiz-bang new comments system. Now, the least you can do is try it out; the form's directly below. :D

]]>
Introducing GreenAsh 5 https://greenash.net.au/thoughts/2022/03/introducing-greenash-5/ Mon, 14 Mar 2022 00:00:00 +0000 https://greenash.net.au/thoughts/2022/03/introducing-greenash-5/ After a solid run of twelve years, I've put GreenAsh v4 out to pasture, and I've launched v5 to fill its plush shoes.

Sporting a simple, readable, mobile-friendly design.
Sporting a simple, readable, mobile-friendly design.

GreenAsh v5 marks the culmination of my continuing mission, to convert over all of my personal sites, and all of the other sites that I still host slash maintain, to use a Static Site Generator (SSG). As with some other sites of mine, GreenAsh is now powered by Eleventy, and is now hosted on Netlify.

As was the case with v4, this new version isn't a complete redesign, it's a realign. First and foremost, the new design's aim is for the thought-reading experience to be a delightful one, with improved text legibility and better formatting of in-article elements. The new design is also (long overdue for GreenAsh!) fully responsive from the ground up, catering for mobile display just as much as desktop.

After nearly 18 years, this is the first ever version of GreenAsh to lack a database-powered back-end. 'Tis a bittersweet parting for me. The initial years of GreenAsh, powered by the One True™ LAMP Stack – originally, albeit briefly, using a home-grown PHP app, and then, for much longer, using Drupal – were (for me) exciting times that I will always remember fondly.

The past decade (and a bit) of the GreenAsh chronicles, powered by Django, has seen the site mature, both technology-wise and content-wise. In this, the latest chapter of The Life of GreenAsh, I hope not just to find some juniper bushes, but also to continue nurturing the site, particularly by penning thoughts of an ever higher calibre.

The most noteworthy feature that I've built in this new version, is a comment moderation and publishing system powered mainly by Netlify Functions. I'm quite proud of what I've cobbled together, and I'll be expounding upon it, in prose coming soon to a thought near you. Watch this space!

Some of the things that I had previously whinged about as being a real pain in Hugo, such as a tag cloud and a monthly / yearly archive, I've now built quite nicely here, using Eleventy, just as I had hoped I would. Some of the functionality that I had manually ported from Drupal to Django (i.e. from PHP to Python), back in the day, such as the autop filter, and the inline image filter, I have now ported from Django to Eleventy (i.e. from Python to Node.js).

As a side effect of the site now being hosted on Netlify, the site's source code is (for the first time) publicly available on GitHub, and even has an open-source license. So feel free to use it as you will.

All of the SSG-powered sites that I've built over the past year, have their media assets (mainly consisting of images) stored in S3 and served by CloudFront (and, in some cases, the site itself is also stored in S3 and is served by CloudFront, rather than being hosted on Netlify). GreenAsh v5 is no exception.

On account of the source code now being public, and of there no longer being any traditional back-end server, I've had to move some functionality out of GreenAsh, that I previously had bundled into Django. In particular, I migrated my invoice data for freelance work – which had been defined as Django models, and stored in the site's database, and exposed in the Django admin – to a simple Google Sheet, which, honestly (considering how little work I do on the side these days), will do, for the foreseeable future. And I migrated my résumé – which had been a password-protected Django view – to its own little password-protected S3 / CloudFront site.

The only big feature of v4 that's currently missing in v5, is site search. This is, of course, much easier to implement for a traditional back-end-powered site, than it is for an SSG-powered site. I previously used Whoosh with Django. Anyway, site search is only a nice-to-have feature, and this is only a small site that's easily browsable, and (in the meantime) folks can just use Google with the site: operator instead. And I hear it's not that hard to implement search for Eleventy these days, so maybe I'll whack that on to GreenAsh v5 sometime soon too.

I've been busy, SSG-ifying all my old sites, and GreenAsh is the lucky last. Now that GreenAsh v5 is live (and now that I've migrated various other non-web-facing things, mainly backups, to S3 buckets), I don't need a VPS anymore! I'll be writing a separate thought, sometime soon, about the pros and cons of still having a VPS in this day and age.

Hope y'all like the new décor.

]]>
The lost Armidale to Wallangarra railway https://greenash.net.au/thoughts/2021/11/the-lost-armidale-to-wallangarra-railway/ Mon, 15 Nov 2021 00:00:00 +0000 https://greenash.net.au/thoughts/2021/11/the-lost-armidale-to-wallangarra-railway/ Running more-or-less alongside the most remote section of the New England Highway, through the Northern Tablelands region of NSW, can be found the remnants of a once-proud train line. The Great Northern Railway, as it was known in its heyday, provided the only railway service linking Sydney and Brisbane, between 1889 and 1930. Regular passenger services continued until 1972, and the line has been completely closed since 1988.

Metro map style illustration of the old Armidale to Wallangarra passenger service
Metro map style illustration of the old Armidale to Wallangarra passenger service
Thanks to: Metro Map Maker

Although I once drove through most of the Northern Tablelands, I wasn't aware of this railway, nor of its sad recent history, at the time. I just stumbled across it a few days ago, browsing maps online. I decided to pen this here wee thought, mainly because I was surprised at how scant the information is about the old line and its stations.

Great Northern Railway as shown in the 1933 official NSW government map
Great Northern Railway as shown in the 1933 official NSW government map
Image source: NSWrail.net

You may notice that some of the stops shown in the 1933 map, are missing from my metro map style illustration. I have omitted all of the stops that are listed as something other than "station" in this long list of facilities on the Main North Line. As far as I can tell, all of the stops listed as "unknown" or "loop", were at best very frugal platform sidings that barely qualified as stations, and their locations were never really populated towns (even going by the generous Aussie bush definition of "populated town", that is, "two people, three pubs").

All that remains of Bungulla, just south of Tenterfield
All that remains of Bungulla, just south of Tenterfield
Image source: NSWrail.net

Although some people haven't forgotten about it – particularly many of the locals – the railway is clearly disappearing from the collective consciousness, just as it's slowly but surely eroding and rotting away out there in the New England countryside.

Stonehenge station, just south of Glen Innes, has seen better days
Stonehenge station, just south of Glen Innes, has seen better days
Image source: NSWrail.net

Some of the stations along the old line were (apparently) once decent-sized towns, but it's not just the railway that's now long gone, it's the towns too! For example, Bolivia (the place that first caught my eye on the map, and that got me started researching all this – who would have imagined that there's a Bolivia in NSW?!), which legend has it was a bustling place at the turn of the 20th century, is nothing but a handful of derelict buildings now.

Bolivia ain't even a one-horse town no more
Bolivia ain't even a one-horse town no more
Image source: NSWrail.net

Other stations – and other towns, for that matter – along the old railway, appear to be faring better. In particular, Black Mountain station is being most admirably maintained by a local group, and Black Mountain village is also alive and well.

The main platform at Black Mountain station
The main platform at Black Mountain station
Image source: NSWrail.net

These days, on the NSW side, the Main North Line remains open up to Armidale, and a passenger train service continues to operate daily between Sydney and Armidale. On the Queensland side, the Southern line between Toowoomba and Wallangarra is officially still open to this day, and is maintained by Queensland Rail; however, my understanding is that a train actually runs on the tracks, all the way down to Wallangarra, only once in a blue moon. On the Main line, a passenger service currently operates twice a week between Brisbane and Toowoomba (it's the Westlander service, which continues from Toowoomba all the way to Charleville).

The unique Wallangarra station, with its standard-gauge NSW side, and its narrow-gauge Qld side
The unique Wallangarra station, with its standard-gauge NSW side, and its narrow-gauge Qld side
Image source: Wikimedia Commons

The chances of the Armidale to Wallangarra railway ever re-opening are – to use the historically appropriate Aussie vernacular – Buckley's and none. The main idea that the local councils have been bandying about for the past few years, has been to convert the abandoned line into a rail trail for cycling. It looks like that plan is on the verge of going ahead, even though a number of local citizens are vehemently opposed to it. Personally, I don't think a rail trail is such a bad idea: the route will at least get more use, and will receive more maintenance, than it has for the past several decades; and it would bring a welcome trickle of tourists and adventurers to the region.

The Armidale to Wallangarra railway isn't completely lost nor forgotten. But it's a woeful echo of its long-gone glory days (it isn't even properly marked on Google Maps – although it's pretty well-marked on OpenStreetMap, and it's still quite visible on Google Maps satellite imagery). And, regrettably, it's one of countless derelict train lines scattered across NSW: others include the Bombala line (which I've seen numerous times, running adjacent to the Monaro Highway, while driving down to Cooma from Sydney), the Nyngan to Bourke line, and the Murwillumbah line.

May this article, if nothing else, at least help to document what exactly the stations were on the old line, and how they're looking in this day and age. And, whether it's a rail trail or just an old relic by the time I get around to it, I'll have to head up there and see the old line for myself. I don't know exactly what future lies ahead for the Armidale to Wallangarra railway, but I sincerely hope that, both literally and figuratively, it doesn't simply fade into oblivion.

]]>
Japan's cherry blossom: as indisputable as climate change evidence gets https://greenash.net.au/thoughts/2021/07/japans-cherry-blossom-as-indisputable-as-climate-change-evidence-gets/ Sun, 25 Jul 2021 00:00:00 +0000 https://greenash.net.au/thoughts/2021/07/japans-cherry-blossom-as-indisputable-as-climate-change-evidence-gets/ This year, Japan's earliest cherry blossom in 1,200 years made headlines around the world. And rightly so. Apart from being (as far as I can tell) a truly unparalleled feat of long-term record-keeping, it's also a uniquely strong piece of evidence in the case for man-made climate change.

I think this graph speaks for itself.
I think this graph speaks for itself.
Image source: BBC News

I just want to briefly dive into the data set (and the academic research behind it), and to explain why, in my opinion, it's such a gem in the sea of modern-day climate science.

Can we trust this data?

Green-minded lefties (of whom I count myself one) would tend to trust this data, without feeling the need to dig any deeper into where it came from, nor into how it got collated. However, putting myself in the shoes of a climate change sceptic, I can imagine that there are plenty of people out there who'd demand the full story, before being willing to digest the data, let alone to accept any conclusions arising from it. And hey, regardless of your political or other leanings, it's a good idea for everyone to conduct some due diligence background research, and to not blindly be believin' (especially not anything that you read online, because I hate to break it to you, but not everything on the internet is true!).

Except this. Believe this!
Except this. Believe this!
Image source: me.me

When I first saw the above graph – of the cherry blossom dates over 1,200 years – in the mainstream media, I assumed that the data points all came from a nice, single, clean, consistent source. Like, you know, a single giant tome, a "cherry blossom codex", that one wizened dude in each generation has been adding a line to, once a year, every year since 812 CE, noting down the date. But even the Japanese aren't quite that meticulous. The real world is messy!

According to the introductory text on the page from Osaka Prefecture University, the pre-modern data points – defined as being from 812 CE to 1880 – were collected:

… from many diaries and chronicles written by Emperors, aristocrats, goveners and monks at Kyoto …

For every data point, the source is listed. Many of the sources yield few or no results in a Google search. For example, try searching (including quotes) for "Yoshidake Hinamiki" (the source for the data points from 1402 and 1403), for which the only results are a handful of academic papers and books in which it's cited, and nothing actually explaining what it is, nor showing more than a few lines of the original text.

Or try searching for "Inryogen Nichiroku" (the source for various data points between 1438 and 1490), which has even fewer results: just the cherry blossom data in question, nothing else! I'm assuming that information about these sources is so limited, mainly due to there being virtually no English-language resources about them, and/or due to the actual titles of the sources having no standard for correct English transliteration. I'm afraid that, since my knowledge of Japanese is close to zilch, I'm unable to search for anything much online in Japanese, let alone for information about esoteric centuries-old texts.

The listed source for the very first data point of 812 CE is the Nihon Kōki. That book, along with the other five books that comprise the Rikkokushi – and all of which are compiled in the Ruijū Kokushi – appears to be one of the more famous sources. It was officially commissioned by the Emperor, and was authored by several statesmen of the imperial court. It appears to be generally considered as a reliable source of information about life in 8th and 9th century Japan.

Japanese literary works go back over a thousand years.
Japanese literary works go back over a thousand years.
Image source: Wikimedia Commons

The data points from 812 CE to 1400 are somewhat sporadic. There are numerous gaps, sometimes of as much as 20 years. Nevertheless, considering the large time scale under study, the data for that period is (in my unqualified layman's opinion) of high enough frequency for it to be statistically useful. The data points from 1400 onwards are more contiguous (i.e. there are far fewer gaps), so there appears to have been a fairly consistent and systematic record-keeping regime in place since then.

How much you want to trust the pre-modern data really depends, I guess, on what your opinion of Japanese civilisation is. When considering that matter, bear in mind that the Imperial House of Japan is believed to be the oldest continuous monarchy in the world, and that going back as far as the 8th century, Japan was already notable for its written works. Personally, I'd be willing to give millennium-old Japanese texts the benefit of the doubt in terms of their accuracy, more than I'd be willing to do for texts from most other parts of the world from that era.

The man behind this data set, Yasuyuki Aono, is an Associate Professor in Environmental Sciences and Technology at Osaka Prefecture University (not a world-class university, but apparently it's roughly one of the top 20 universities in Japan). He has published numerous articles over his 30+ year career. His 2008 paper: Phenological data series of cherry tree flowering in Kyoto, Japan, and its application to reconstruction of springtime temperatures since the 9th century – the paper which is the primary source of much of the data set – is his seminal work, having been cited over 250 times to date.

So, the data set, the historical sources, and the academic credentials, all have some warts. But, in my opinion, those warts really are on the small side. It seems to me like pretty solid research. And it appears to have all been quite thoroughly peer reviewed, over numerous publications, in numerous different journals, by numerous authors, over many years. You can and should draw your own conclusions, but personally, I declare this data to be trustworthy, and I assert that anyone who doubts its trustworthiness (after conducting an equivalent level of background research to mine) is splitting hairs.

It don't get much simpler

Having got that due diligence out of the way, I hope that even any climate change sceptics out there who happen to have read this far (assuming that any such folk should ever care to read an article like this on a web site like mine) are willing to admit: this cherry blossom data is telling us something!

I was originally hoping to give this article a title that went something like: "Five indisputable bits of climate change evidence". That is, I was hoping to find four or so other bits of evidence as good as this one. But I couldn't! As far as I can tell, there's no other record of any other natural phenomenon on this Earth (climate change related or otherwise), that has been consistently recorded, in writing, more-or-less annually, for the past 1,000+ years. So I had to scrap that title, and just focus on the cherry blossoms.

The annual cherry blossom in Japan is a spectacle of great beauty.
The annual cherry blossom in Japan is a spectacle of great beauty.
Image source: Wikimedia Commons

Apart from the sheer length of the time span, the other thing that makes this such a gem, is the fact that the data in question is so simple. It's just the date of when people saw their favourite flower bloom each year! It's pretty hard to record it wrongly – even a thousand years ago, I think people knew what day of the year it was. It's not like temperature, or any other non-discrete value, that has to be carefully measured, by trained experts, using sensitive calibrated instruments. Any old doofus can write today's date, and get it right. It's not rocket science!

That's why I really am excited about this cherry blossom data being the most indisputable evidence of climate change ever. It's not going back 200 years, it's going back 1,200 years. It's not projected data, it's actual data. It's not measured, it's observed. And it was pretty steady for a whole millennium, before taking a noticeable nosedive in the 20th century. If this doesn't convince you that man-made climate change is real, then you, my friend, have well and truly buried your head in the sand.

]]>
On Tina https://greenash.net.au/thoughts/2021/06/on-tina/ Fri, 25 Jun 2021 00:00:00 +0000 https://greenash.net.au/thoughts/2021/06/on-tina/ Continuing my foray into the world of Static Site Generators (SSGs), this time I decided to try out one that's quite different: TinaCMS (although Tina itself isn't actually an SSG, it's just an editing toolkit; so, strictly speaking, the SSG that I took for a spin is Next.js). Shiny new toys. The latest and greatest that the JAMstack has to offer. Very much all alpha (I encountered quite a few bugs, and there are still some important features missing entirely). But wow, it really does let you have your cake and eat it too: a fast, dumb, static site when logged out, that transforms into a rich, Git-backed, inline CMS when logged in!

Yes, it's named after that Tina, from Napoleon Dynamite.
Yes, it's named after that Tina, from Napoleon Dynamite.
Image source: Pinterest

Pressing on with my recent tradition of converting old sites of mine from dynamic to static, this time I converted Daydream Believers. I deliberately chose that site, because its original construction with Flask Editable Site had been an experiment, trying to achieve much the same dynamic inline editing experience as that provided by Tina. Plus, the site has been pretty much abandoned by its owners for quite a long time, so (much like my personal sites) there was basically no risk involved in touching it.

To give you a quick run-down of the history, Flask Editable Site was a noble endeavour of mine, about six years ago – the blurb from the demo sums it up quite well:

The aim of this app is to demonstrate that, with the help of modern JS libraries, and with some well-thought-out server-side snippets, it's now perfectly possible to "bake in" live in-place editing for virtually every content element in a typical brochureware site.

This app is not a CMS. On the contrary, think of it as a proof-of-concept alternative to a CMS. An alternative where there's no "admin area", there's no "editing mode", and there's no "preview button".

There's only direct manipulation.

That sounds eerily similar to "the acronym TinaCMS standing for Tina Is Not A CMS" (yes, yet another recursive acronym in the IT world, in the grand tradition of GNU), as explained in the Tina FAQ:

Tina introduces an entirely new paradigm to the content management space, which can make it difficult to grasp. In short, Tina is a toolkit for making your website its own CMS. It's a suite of packages that enables developers to build a customized content management system into the website itself.

(Who knows, maybe Flask Editable Site was one of the things that inspired the guys behind Tina – if so, I'd be flattered – although I don't believe they've heard of it).

Flask Editable Site boasted essentially the same user experience – i.e. that as soon as you log in, everything is editable inline. But the content got saved the old-skool CMS way, in a relational database. And the page(s) got rendered the old-skool CMS way, dynamically at run-time. And all of that required an old-skool deployment, on an actual server running Nginx / PostgreSQL / gunicorn (or equivalents). Plus, the Flask Editable Site inline components didn't look as good as Tina's do out-of-the-box (although I tried my best, I thought they looked half-decent).

So, I rebuilt Daydream Believers in what is currently the recommended Tina way (it's the way the tinacms.org website itself is currently built): TinaCMS running on top of Next.js, and saving content directly to GitHub via its API. I didn't use Tina's GitHub media store (which is currently the easiest way to manage images and other media with Tina); instead, I wrote an S3 media store for Tina – something that Tina is sorely lacking, and that many other SSGs / headless CMSes already have. I hope to keep working on that draft PR and to get it merged sometime soon. The current draft works (I'm running it in production), but it has some rough edges.

Daydream Believers with TinaCMS editing mode enabled.
Daydream Believers with TinaCMS editing mode enabled.

The biggest hurdle for me, in building my first Tina site, was the fact that a Tina website must be built in React. I've dabbled in React over the past few years, mainly in my full-time job, not particularly by choice. It's rather ironic that this is my first full project built in React, and it's a static website! It's not that I don't like the philosophy or the syntax of React, I'm actually pretty on board with all that (and although I loathe Facebook, I've never held that against React).

It's just that: React is quite a big learning curve; it bloats a web front-end with its gazillion dependencies; and every little thing in the front-end has to be built (or rebuilt) in React, because it doesn't play nicely with any non-React code (e.g. old-skool jQuery) that touches the DOM directly. Anyway, I've now learnt a fair bit of React (still plenty more learning to go); and the finished site seems to load reasonably fast; and I managed to get the JS from the old site playing reasonably nicely with the new site (some via a hacky plonking of old jQuery-based code inside the main React "app" component, and some via rewriting it as actual React code).

TinaCMS isn't really production-ready just yet: I had to fix some issues just to get started with it, including bugs in the official docs and in the beginner guides.

Nevertheless, I'm super impressed with it. This is the kind of delightful user experience that I and many others were trying to build 15+ years ago in Drupal. I've cared about making awesome editable websites for an awfully long time now, and I really am overjoyed to see that awesomeness evolving to a whole new level with Tina.

Compared to the other SSGs that I've used lately – Hugo and Eleventy – Tina (slash Next.js) does have some drawbacks. It's far less mature. It has a slower build time. It doesn't scale as well. The built front-end is fatter. You can't just copy-paste legacy JS into it. You have to accept the complexity cost of React (just to build a static site!). You have to concern yourself with how everything looks in edit mode. Quite a lot of boilerplate code is required for even the simplest site.

You can also accompany traditional SSGs, such as Hugo and Eleventy, with a pretty user-friendly (and free, and SaaS) git-based CMS, such as Forestry (PS: the Forestry guys created Tina) or Netlify CMS. They don't provide any inline editing UI, they just give you a more traditional "admin site". However, they do have pretty good "live preview" functionality. Think of them as a middle ground between a traditional SSG with no editing UI, and Tina with its rich inline editing.

So, would I use Tina again? For a smaller brochureware site, where editing by non-devs needs to be as user-friendly as possible, and where I have the time / money / passion (pick approximately two!) to craft a great experience, sure, I'd love to (once it's matured a bit more). For larger sites (100+ pages), and/or for sites where user-friendly editing isn't that important, I'd probably look elsewhere. Regardless, I'm happy to be on board for the Tina journey ahead.

]]>
Introducing: Is Pacific Highway Upgraded Yet? https://greenash.net.au/thoughts/2021/06/introducing-is-pacific-highway-upgraded-yet/ Tue, 08 Jun 2021 00:00:00 +0000 https://greenash.net.au/thoughts/2021/06/introducing-is-pacific-highway-upgraded-yet/ Check out this fun little site that I just built: Is Pacific Highway Upgraded Yet?

Spoiler alert: no it's not!
Spoiler alert: no it's not!

I got thinking about this, in light of the government's announcement at the end of 2020 that the Pacific Highway upgrade is finished. I was like, hang on, no it's not! How about a web site to tell people how long we've already been waiting for this (spoiler alert: ages!), and how much longer we'll probably be waiting?

The site comes complete with a countdown timer, which is currently set to 1 Jan 2030, a date that I arbitrarily and fairly optimistically picked as the target completion date of the Hexham bypass (but that project is still in the planning stage; no construction dates have been announced yet).

Fellow Australians, enjoy!

]]>
On Eleventy https://greenash.net.au/thoughts/2021/04/on-eleventy/ Wed, 14 Apr 2021 00:00:00 +0000 https://greenash.net.au/thoughts/2021/04/on-eleventy/ Following on from my last experiment with Hugo, I decided to dabble in a different static site generator (SSG). This time, Eleventy. I've rebuilt another one of my golden oldies, Jaza's World, using it. And, similarly, source code is up on GitHub, and the site is hosted on Netlify. I'm pleased to say that Eleventy delivered in the areas where Hugo disappointed me most, although there were things about Hugo that I missed.

11ty!
11ty!

First and foremost, Eleventy allows virtually all the custom code you might need. This is in stark contrast to Hugo, with which my biggest gripe was its lack of support for any custom code whatsoever, except for template code. The most basic code hook that Eleventy supports – filters – will get you pretty far: I whipped up some filters for date formatting, for array slicing, for getting parent pages, and for getting subsets of tags. Eleventy's custom collections are also handy: for example, I defined a collection for my nav menu items. I didn't find myself needing to write any Eleventy plugins of my own, but my understanding is that you have access to the same Eleventy API methods in a plugin, as you do in a regular site-level .eleventy.js file.
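
To give you a flavour of how little ceremony is involved, registering custom code in .eleventy.js looks roughly like this (the filter and collection names below are made up for illustration; my real ones are in the repo):

// .eleventy.js – a sketch of registering custom filters and a custom collection.
// The names here are illustrative, not the actual ones used on Jaza's World.
module.exports = function (eleventyConfig) {
  // A simple array-slicing filter, usable in templates as {{ items | limit(5) }}
  eleventyConfig.addFilter("limit", (arr, n) => arr.slice(0, n));

  // A date-formatting filter
  eleventyConfig.addFilter("yearOnly", (dateObj) => dateObj.getFullYear());

  // A custom collection, e.g. for nav menu items, selected by tag
  eleventyConfig.addCollection("navMenuItems", (collectionApi) =>
    collectionApi.getFilteredByTag("nav")
  );
};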

One of Eleventy's most powerful features is its pagination. It's implemented as a "core plugin" (Pagination.js is the only file in Eleventy core's Plugins directory), but it probably makes sense to just think of it as a core feature, period. Its main use case is, unsurprisingly, for paging a list of content. That is, for generating /articles/, /articles/page/2/, /articles/page/99/, and so on. But it can handle any arbitrary list of data, it doesn't have to be "page content". And it can generate pages based on any permalink pattern, which you can set to not even include a "page number" at all. In this way, Eleventy can generate pages "dynamically" from data! Jaza's World doesn't have a monthly archive, but I could have created one using Eleventy pagination in this way (whereas a dynamically-generated monthly archive is currently impossible in Hugo, so I resorted to just manually defining a page for each month).
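
As a rough sketch of paginating an arbitrary data array from a JavaScript template (the data key, permalink pattern, and field names below are all made up), it goes something like this:

// gallery.11ty.js – a sketch of paginating over an arbitrary data array
// (not "page content"), with a custom permalink pattern.
module.exports = {
  data() {
    return {
      pagination: {
        data: "galleryItems", // any array in the data cascade
        size: 20,
        alias: "items",
      },
      permalink: (data) => `/gallery/page-${data.pagination.pageNumber + 1}/`,
    };
  },

  render(data) {
    return data.items
      .map((item) => `<img src="${item.src}" alt="${item.title}">`)
      .join("\n");
  },
};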

Jaza's World migrated to 11ty
Jaza's World migrated to 11ty

Eleventy's pagination still has a few rough edges. In particular, it doesn't (really) currently support "double pagination". That is, /section-foo/parent-bar-generated-by-pagination/child-baz-also-generated-by-pagination/ (although it's the same issue even if parent-bar is generated just by a permalink pattern, without using pagination at that parent level). And I kind of needed that feature, like, badly, for the Gallery section of Jaza's World. So I added support for this to Eleventy myself, by way of letting the pagination key be determined dynamically based on a callback function. As of the time of writing, that PR is still pending review (and so for now, on Jaza's World, I'm running a branch build of Eleventy that contains my change). Hopefully it will get in soon, in which case the enhancement request for double pagination (which is currently one of three "pinned" issues in the Eleventy issue tracker) should be able to be considered fulfilled.

JavaScript isn't my favourite language. I've been avoiding heavy front-end JS coding (with moderate success) for some time, and I've been trying to distance myself from back-end Node.js coding too (with less success). Python has been my language of choice for yonks now. So I'm giving Eleventy a good rap despite it being all JS, not because of it. I like that it's a minimalist JS tool, that it's not tied to any massive framework (such as React), and that it appears to be quite performant (I haven't formally benchmarked it against Hugo, but for my modest needs so far, Eleventy has been on par: it generates Jaza's World with its 500-odd pages in about 2 seconds). And hey, JS is as good a language as any these days, for the kind of script snippets you need when using a static site generator.

Eleventy has come a long way in a short time, but nevertheless, I don't feel that it's worthy yet of being called a really solid tool. Hugo is certainly a more mature piece of software, and a more mature community. In particular, Eleventy feels like a one-man show (Hugo suffers from this too, but it seems to have developed a slightly better contributor base). Kudos to zachleat for all the amazing work he has done and continues to do, but for Eleventy to be sustainable long-term, it needs more of a team.

With Jaza's World, I played around with Eleventy a fair bit, and got a real site built and deployed. But there's more I could do. I didn't bother moving any of my custom code into its own files, nor into separate plugins; I just left it all in .eleventy.js. I also didn't bother writing JS unit tests – for a more serious project, what I'd really like to do, is to have tests that run in a CI pipeline (ideally in GitHub Actions), and to only kick off a Netlify deployment once there's a green build (rather than the usual setup of Netlify deploying as soon as the master branch in GitHub is updated).

Site building in Eleventy has been fun, I reckon I'll be doing more of it!

]]>
On Hugo https://greenash.net.au/thoughts/2021/02/on-hugo/ Thu, 11 Feb 2021 00:00:00 +0000 https://greenash.net.au/thoughts/2021/02/on-hugo/ After having it on my to-do list for several years, I finally got around to trying out a static site generator (SSG). In particular, Hugo. I decided to take Hugo for a spin, by rebuilding one of my golden oldies, Jaza's World Trip, with it. And, for bonus points, I published the source code on GitHub, and I deployed the site on Netlify. Hugo is great software with a great community, however it didn't quite live up to my expectations.

Hugo: fast like a... gopher?
Hugo: fast like a... gopher?
Image source: Hugo

Memory lane

To give you a bit of history: worldtrip was originally built in Drupal (version 4.7), back in 2007. So it started its life as a real, old-school, PHP CMS driven blog. I actually wrote most of the blog entries from internet cafés around the world, by logging in and typing away – often while struggling with a non-English keyboard, a bad internet connection, a sluggish machine, and a malware-infested old Windows. Ah, the memories! And people posted comments too.

Then, in 2014, I converted it to a "static PHP site", which I custom-built. It was static as in "no database" – all the content was in flat files, virtually identical to the "content files" of Hugo and other SSGs – but not quite static as in "plain HTML files". It was still PHP, and so it still needed to be served by a PHP-capable web server (like Apache or Nginx with their various modules).

In retrospect, maybe I should have SSG-ified worldtrip in 2014. But SSGs still weren't much of a thing back then: Hugo was in its infancy; Netlify didn't exist yet; nor did any of the JS-based cool new kids. The original SSG, Jekyll, was around, but it wasn't really on my radar (I didn't try out Jekyll until around 2016, and I never ended up building or deploying a finished site with it). Plus I still wasn't quite ready to end my decade-long love affair with PHP (I finally got out of that toxic relationship for good, a few years later). Nor was I able to yet embrace the idea of hosting a whole web site on anything other than an old-school box: for a decade or so, all I had known was "shared hosting" and VPSes.

Hugo time

So, it's 2021, and I've converted worldtrip yet again, this time to Hugo. It's pretty much unchanged on the surface. The main difference is that the trip photos (both in the "gallery" section, and embedded in blog posts) are now sourced from an S3 bucket instead of from Flickr (I needed to make this change in order to retire my Flickr account). I also converted the location map from a Google map to a Leaflet / Mapbox map (this was also needed, as Google now charges for even the simplest Maps API usage). I could have made those changes without re-building the whole site, but they were a good excuse to do just that.

The Leaflet and Mapbox powered location map.
The Leaflet and Mapbox powered location map.

True to its word, I can attest that Hugo is indeed fast. On my local machine, Hugo generates all of the 2,000+ pages of worldtrip in a little over 2 seconds. And deploying it on Netlify is indeed easy-peasy. And free – including with a custom domain, with SSL, with quite generous bandwidth, with plenty of build minutes – so kudos to Netlify (and I hope they keep on being so generous!).

Hugo had pretty much everything I needed, to make re-building worldtrip a breeze: content types, front matter, taxonomies, menus, customisable URLs, templating (including partials and shortcodes), pagination, and previous / next links. It didn't support absolutely all of worldtrip's features out-of-the-box – but then again, nothing ever does, right? And filling in those remaining gaps was going to be easy, right?

As it turns out, building custom functionality in Hugo is quite hard.

The first pain point that I hit, was worldtrip's multi-level (country / city) taxonomy hierarchy. Others have shared their grief with this, and I shared mine there too. I got it working, but only by coding way more logic into a template than should have been necessary, and by abusing the s%#$ out of Hugo templating's scratch feature. The resulting partial template is an unreadable abomination. It could have been a nice, clean, testable function (and it previously was one, in PHP), were I able to write any actual code in a Hugo site (in Go or in any other language). But no, you can't write actual code in a Hugo site, you can only write template logic.

Update: I just discovered that support for return'ing a value of any type (rather than just rendering a string) was added to Hugo a few years back (and is documented, albeit rather tersely). So I could rely on Hugo's scratch a bit less, if I were to instead return the countries / cities array. But the logic still has to live in a template!

Same with the tag cloud. It's not such a big deal, it's a problem that various people have solved at the template level, and I did so too. What I did for weighted tags isn't totally illegible. But again, it was previously (pre-Hugo) implemented as a nice actual function in code, and now it's shoved into template logic.

The weighted tag cloud.
The weighted tag cloud.

The photo gallery was cause for considerable grief too. Because I didn't want an individual page to be rendered for each photo, my first approach was to define the gallery items in data files. But I needed the listing to be paginated, and I soon discovered that Hugo's pagination only supports page collections, not arbitrary lists of data (why?!). So, take two, I defined them as headless bundles. But it just so happens that listing headless bundles (as opposed to just retrieving a single one) is a right pain, and if you're building up a list of them and then paginating that list, it's also hacky and very inefficient (when I tried it, my site took 4x longer to build, because it was calling readDir on the whole photo directory, for each paginated chunk).

Finally, I stumbled across Hugo's (quite new) "no render" feature, and I was able to define and paginate my gallery items (without having a stand-alone page for each photo) in an efficient and non-hacky way, by specifying the build options render = "never" and list = "local". I also fixed a bug in Hugo itself (my first tiny bit of code written in golang!), to exclude "no render" pages from the sitemap (as of writing, the fix has been merged but not included in a stable Hugo release), thus making it safe(r) to specify list = "always" (which you might need, instead of list = "local", if you want to list your items anywhere else on the site, other than on their parent page). So, at least with the photo gallery – in contrast to my other above-mentioned Hugo pain points – I'm satisfied with the end result. Nevertheless, a more-than-warranted amount of hair tearing-out did occur.

The worldtrip monthly archive wasn't particularly hard to implement, thanks to this guide that I followed quite closely. But I was disappointed that I had to create a physical "page" content file for each month, in order for Hugo to render it. Because guess what, Hugo doesn't have built-in support for chronological archive pages! And because, since Hugo offers no real mechanism for you to write code anywhere to (programmatically) render such pages, you just have to hack around that limitation. I didn't do what the author of that guide did (he added a stand-alone Node.js script to generate more archive "page" content files when needed), because worldtrip is a retired site that will never need more such pages generated, and because I'd rather avoid having totally-separate-to-Hugo build scripts like that. The monthly archive templates also contain more logic than they ideally should.

The monthly archive index page.
The monthly archive index page.

Mixed feelings

So, I succeeded in migrating worldtrip to Hugo. I can pat myself on the back, job well done, jolly good old chap. I don't regret having chosen Hugo: it really is fast; it's a well-written (to my novice golang eyes) and well-maintained open-source codebase; it boasts an active dev and support community; its documentation is of a high standard; and it comes built-in with 95% of the functionality that any static site could possibly need.

I wanted, and I still want, to love Hugo, for those reasons. And just because it's golang (which I have vaguely been wanting to learn lately … although I have invested time in learning the basics of other languages over the past several years, namely Erlang and Rust). And because it really seems like the best-in-breed among SSGs: it's focused on the basics of HTML page generation (not trying to "solve React for static sites", or other such nonsense, at the same time); it takes performance and scalability seriously; and it fosters a good dev, design, and content authoring experience.

However, it seems that, by design, it's completely impossible to write custom code in an actual programming language (as opposed to in a presentation-layer template) that's hooked into Hugo in any way (apart from by hacking Hugo core). I don't mind that Hugo is opinionated. Many great pieces of software are opinionated – Django, for example.

But Django is far more flexible: you can programmatically render any page, with any URL, that takes your fancy; you can move any presentational logic you want into procedural code (usually either in the view layer, to populate template variables, or in custom template tags), to keep your templates simple; and you can model your data however you want (so you're free to implement something like a multi-level taxonomy yourself – although I admit that this isn't a fair apples vs apples comparison, as Django data is stored in a database). I realise that Django – and Rails, and Drupal, and WordPress – all generate dynamic sites; but that's no excuse: an SSG can and should allow the same level of flexibility via custom code.

Hugo is somewhat lacking in flexibility.
Hugo is somewhat lacking in flexibility.
Image source: pixabay

There has been some (but not that much) discussion about supporting custom code in Hugo (mainly for the purpose of fetching and parsing custom data, but potentially for more things). There are technical challenges (mainly related to Go being a compiled language), but it would be possible (not necessarily in Go; various other real programming languages have been suggested). There has also been some mention of custom template functions (that thread is already quite old though). Nothing has been agreed upon or built to date. I for one will certainly watch this space.

For my next static site endeavour, at least, I think I'll take a different SSG for a spin. I'm thinking Eleventy, which appears to allow a lot of custom code, albeit all JS. (And my next project will be a migration of another of my golden oldies, most likely Jaza's World, which has a similar tech history to worldtrip).

Will I use Hugo again? Probably. Will I contribute to Hugo more? If I have time, and if I have itches to scratch, then sure. However, I'm a dev, and I like to code. And Hugo, despite having so much going for it, seems to be completely geared towards people who aren't devs, and who just want to publish content. So I don't see myself ever feeling "right at home" with Hugo.

]]>
Private photo collections with AWSPics https://greenash.net.au/thoughts/2021/02/private-photo-collections-with-awspics/ Tue, 02 Feb 2021 00:00:00 +0000 https://greenash.net.au/thoughts/2021/02/private-photo-collections-with-awspics/ I've created a new online home for my formidable collection of 25,000 personal photos. They now all live in an S3 bucket, and are viewable in a private gallery powered by the open-source AWSPics. In general, I'm happy with the new setup.

For the past 15 years, I have painstakingly curated and organised my photos on Flickr. I have no complaints or regrets: Flickr was and still is a fantastic service, and in its heyday it was ahead of its time. However, after 15 years as a loyal Pro member, it's with bittersweet reluctance that I've decided to cancel my Flickr account. The main reason for my parting ways with Flickr, is that its price has increased (and is continuing to increase), quite significantly of late, after being set in stone for many years.

I also just wanted to build (and felt that I was quite overdue in building) a photo solution crafted (at least partially) with my own hands, and that I fully control, rather than just letting SaaS do all the work for me. Similarly, even though I've always trusted and I still trust Flickr with my data, I wanted to migrate my photos to a storage back-end that I own and manage myself, and an S3 bucket is just that (at the least, IaaS is closer to that ideal than SaaS is).

I had never made any of my personal photos private, although I always could have, back in the Flickr days. I never felt that it was necessary. I was young and free, and the photos were all of me hanging out with my friends, and/or gallivanting around the world with other carefree backpackers. But I'm at a different stage of my life now. These days, the photos are all of my kids, and so publishing them for the whole world to see is somewhat less appropriate. And AWSPics makes them all private by default. So, private it is.

Many thanks to jpsim for building AWSPics, it's a great little stack. AWSPics had nearly everything I needed, when I stumbled across it about 3 months ago, and I certainly could have used it as-is, no yours-truly dev required. But, me being a fastidious ol' dev, and it being open-source, naturally I couldn't help but add a few bells and whistles to it. In particular, I scratched my own itch by building support for collections of albums, so that I could preserve the three-level hierarchy of Collections -> Albums -> Pictures that I used religiously on Flickr. I also wrote a whole lot of unit tests for the AWSPics site builder (which is a Node.js Lambda function), before making any changes, to ensure that I didn't break existing functionality. Other than that, I just submitted a few minor bug fixes.

I'm not planning on enhancing AWSPics a whole lot more. It works for my humble needs. I'm a dev, not a designer, nor a photographer. That said, 25,000 photos is a lot (and growing), and I feel like I'm pushing the site builder Lambda a bit close to its limits at the moment (it takes over a minute to run, whereas ideally a Lambda function completes within a few seconds). Adding support for partial site rebuilds (i.e. only rebuilding specific albums or collections) would resolve that. Plus I'm sure there are a few more minor bits and pieces I could work on, should I have the time and the inclination.

Well, that's all I have to say about that. Just wanted to formally announce that shift that my photo collection has made, and to give kudos where it's deserved.

]]>
Good devs care about code https://greenash.net.au/thoughts/2021/01/good-devs-care-about-code/ Thu, 28 Jan 2021 00:00:00 +0000 https://greenash.net.au/thoughts/2021/01/good-devs-care-about-code/ Theories abound regarding what makes a good dev. These theories generally revolve around one or more particular skills (both "hard" and "soft"), and levels of proficiency in said skills, that are "must-have" in order for a person to be a good dev. I disagree with said theories. I think that there's only one thing that makes a good dev, and it's not a skill at all. It's an attitude. A good dev cares about code.

There are many aspects of code that you can care about. Formatting. Modularity. Meaningful naming. Performance. Security. Test coverage. And many more. Even if you care about just one of these, then: (a) I salute you, for you are a good dev; and (b) that means that you're passionate about code, which in turn means that you'll care about more aspects of code as you grow and mature, which in turn means that you'll develop more of them there skills, as a natural side effect. The fact that you care, however, is the foundation of it all.

Put your hands in the air like you just don't care.
Put your hands in the air like you just don't care.
Image source: TripAdvisor

If you care about code, then code isn't just a means to an end: it's an end unto itself. If you truly don't care about code at all, but only what it accomplishes, then not only are you not a good dev, you're not really a dev at all. Which is OK, not everyone has to be a dev. If what you actually care about is that the "Unfranked Income YTD" value is accurate, then you're probably a (good) accountant. If it's that the sidebar is teal, then you're probably a (good) graphic designer. If it's that national parks are distinguishable from state forests at most zoom levels, then you're probably a (good) cartographer. However, if you copy-pasted and cobbled together snippets of code to reach your goal, without properly reading or understanding or caring about the content, then I'm sorry, but you're not a (good) dev.

Of course, a good dev needs at least some "hard" skills too. But, as anyone who has ever interviewed or worked with a dev knows, those skills – listed so prominently on CVs and in JDs – are pretty worthless if they aren't backed by a care for quality. Great, 10 years of C++ experience! And you've always given all variables one-character names? Great, you know Postgres! But you never add an index until lots of users complain that a page is slow? Great, a Python ninja! What's that, you just write one test per piece of functionality, and it's a Selenium test? Call me harsh, but those sound to me like devs who just don't care.

"Soft" skills are even easier to rattle off on CVs and in JDs, and are worth even less if accompanied by the wrong attitude. Conversely, if a dev has the right attitude, then these skills flourish pretty much automatically. If you care about the code you write, then you'll care about documentation in wiki pages, blog posts, and elsewhere. You'll care about taking the initiative in efforts such as refactoring. You'll care about collaborating with your teammates more. You'll care enough to communicate with your teammates more. "Caring" is the biggest and the most important soft skill of them all!

Plus Jamiroquai dancing skills.
Plus Jamiroquai dancing skills.
Image source: Rick Kuwahara

Formal education in programming (from a university or elsewhere) certainly helps with developing your skills, and it can also start you on your journey of caring about code. But you can find it in yourself to care, and you can learn all the tools of the trade, without any formal education. Many successful and famous programmers are proof of that. Conversely, it's possible to have a top-notch formal education up your sleeve, and to still not actually care about code.

It's frustrating when I encounter code that the author clearly didn't care about, at least not in the same ways that I care. For example, say I run into a thousand-line function. Argh, why didn't they break it up?! It might bother me first and foremost because I'm the poor sod who has to modify that code, 5 years later; that is, now it's my problem. But it would also sadden me, because I (2021 me, at least!) would have cared enough to break it up (or at least I'd like to think so), whereas that dev at that point in time didn't care enough to make the effort. (Maybe that dev was me 5 years ago, in which case I'd be doubly disappointed, although wryly happy that present-day me has a higher care factor).

Some aspects of code are easy to start caring about. For example, meaningful naming. You can start doing it right now, no skills required, except common sense. You can, and should, make this New Year's resolution: "I will not name any variable, function, class, file, or anything else x, I will instead name it num_bananas_in_tummy"! Then follow through on that, and the world will be a better place. Amen.
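
To make that resolution concrete, here's the world's tiniest before-and-after sketch, using the article's own banana-counting example:

```python
# Before: a name that tells the reader nothing.
x = 3
x += 1

# After: the same value, with a name that actually carries the meaning.
num_bananas_in_tummy = 3
num_bananas_in_tummy += 1  # ate another banana
```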

Others are more challenging. For example, test coverage. You need to first learn how to write and run tests in one or more programming languages. That has gotten much easier over the past few decades, depending on the language, but it's still a learning curve. You also need to learn the patterns of writing good tests (which can be a whole specialised career in itself). Plus, you need to understand why tests (particularly unit tests), and test coverage, are important at all. Only then can you start caring. I personally didn't start writing or caring about tests until relatively recently, so I empathise with those of you who haven't yet got there. I hope to see you soon on the other side.
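
If you've never seen one before, here's roughly the smallest unit test imaginable, using Python's built-in unittest module (the function under test is invented purely for illustration):

```python
import unittest

def add_bananas(num_bananas_in_tummy, bananas_eaten):
    """Return the new banana count after eating some more."""
    return num_bananas_in_tummy + bananas_eaten

class TestAddBananas(unittest.TestCase):
    def test_eating_two_bananas(self):
        self.assertEqual(add_bananas(3, 2), 5)

    def test_eating_no_bananas(self):
        self.assertEqual(add_bananas(3, 0), 3)

if __name__ == "__main__":
    unittest.main()
```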

I suspect that this theory of mine applies in much the same way, to virtually all other professions in the world. Particularly professions that involve craftsmanship, but other professions too. Good pharmacists actually care about chemical compounds. Good chefs actually care about fresh produce. Good tailors actually care about fabrics. Good builders actually care about bricks. It's not enough to just care about the customers. It's not enough to just care about the end product. And it's certainly not enough to just care about the money. In order to truly excel at your craft, you've got to actually care about the raw material.

I really do.
I really do.
Image source: Brainless Tales

I'm not writing this as an attack on anyone that I know, or that I've worked with, or whose code I've seen. In fact, I've been fortunate in that almost all fellow devs with whom I have crossed paths, are folks who have demonstrated that they care, and who are therefore, in my humble opinion, good devs. And I'm not trying to make myself out to be the patron saint of caring about code, either. Sorry if I sound patronising in this article. I'm not perfect any more than anyone else is. Plenty of people care more than I do. And different people care about different things. And we're all on a journey: I cared about fewer aspects of code 10 years ago than I do now; and I hope to care about more aspects of code than I do today, 10 years in the future.

]]>
Tolstoy: the forgotten philosopher https://greenash.net.au/thoughts/2020/10/tolstoy-the-forgotten-philosopher/ Sun, 11 Oct 2020 00:00:00 +0000 https://greenash.net.au/thoughts/2020/10/tolstoy-the-forgotten-philosopher/ I recently finished reading the classic novel War and Peace. The 19th-century epic is considered the masterpiece of Leo Tolstoy, and I must say it took me by surprise. In particular, I wasn't expecting its second epilogue, which is a distinct work of its own (and one that arguably doesn't belong in a novel): a philosophical essay discussing the question of "free will vs necessity". I know that the second epilogue isn't to everyone's taste, but personally I feel that it's a real gem.

I was also surprised to learn, after doing a modest bit of research, that Tolstoy is seldom mentioned amongst any of the prominent figures in philosophy or metaphysics over the past several centuries. The only articles that even deign to label Tolstoy as a philosopher, are ones that are actually more concerned with Tolstoy as a cult-inspirer, as a pacifist, and as an anarchist.

So, while history has been just and generous in venerating Tolstoy as a novelist, I feel that his contribution to the field of philosophy has gone unacknowledged. This is no doubt in part because Tolstoy didn't consider himself a philosopher, and because he didn't pen any purely philosophical works (published separately from novels and other works), and because he himself criticised the value of such works. Nevertheless, I feel warranted in asking: is Tolstoy a forgotten philosopher?

Tolstoy statue in British Columbia
Tolstoy statue in British Columbia
Image source: Waymarking

Free will in War and Peace

The concept of free will that Tolstoy articulates in War and Peace (particularly in the second epilogue), in a nutshell, is that there are two forces that influence every decision at every moment of a person's life. The first, free will, is what resides within a person's mind (and/or soul), and is what drives him/her to act per his/her wishes. The second, necessity, is everything that resides external to a person's mind / soul (that is, a person's body is also for the most part considered external), and is what strips him/her of choices, and compels him/her to act in conformance with the surrounding environment.

Whatever presentation of the activity of many men or of an individual we may consider, we always regard it as the result partly of man's free will and partly of the law of inevitability.

War and Peace, second epilogue, chapter IX

A simple example that would appear to demonstrate acting completely according to free will: say you're in an ice cream parlour (with some friends), and you're tossing up between getting chocolate or hazelnut. There's no obvious reason why you would need to eat one flavour vs another. You're partial to both. They're both equally filling, equally refreshing, and equally (un)healthy. You'll be able to enjoy an ice cream with your friends regardless. You're free to choose!

You say: I am not free. But I have lifted my hand and let it fall. Everyone understands that this illogical reply is an irrefutable demonstration of freedom.

War and Peace, second epilogue, chapter VIII

And another simple example that would appear to demonstrate being completely overwhelmed by necessity: say there's a gigantic asteroid on a collision course for Earth. It's already entered the atmosphere. You're looking out your window and can see it approaching. It's only seconds until it hits. There's no obvious choice you can make. You and all of humanity are going to die very soon. There's nothing you can do!

A sinking man who clutches at another and drowns him; or a hungry mother exhausted by feeding her baby, who steals some food; or a man trained to discipline who on duty at the word of command kills a defenseless man – seem less guilty, that is, less free and more subject to the law of necessity, to one who knows the circumstances in which these people were placed …

War and Peace, second epilogue, chapter IX

Decisions decisions
Decisions decisions
Image source: Wikimedia Commons

However, the main point that Tolstoy makes regarding these two forces, is that neither of them does – and indeed, neither of them can – ever exist in absolute form, in the universe as we know it. That is to say, a person is never (and can never be) free to decide anything 100% per his/her wishes; and likewise, a person is never (and can never be) shackled such that he/she is 100% compelled to act under the coercion of external agents. It's a spectrum! And every decision, at every moment of a person's life (and yes, every moment of a person's life involves a decision), lies somewhere on that spectrum. Some decisions are made more freely, others are more constrained. But all decisions result from a mix of the two forces.

In neither case – however we may change our point of view, however plain we may make to ourselves the connection between the man and the external world, however inaccessible it may be to us, however long or short the period of time, however intelligible or incomprehensible the causes of the action may be – can we ever conceive either complete freedom or complete necessity.

War and Peace, second epilogue, chapter X

So, going back to the first example: there are always some external considerations. Perhaps there's a little bit more chocolate than hazelnut in the tubs, so you'll feel just that little bit guilty if you choose the hazelnut, knowing that you'll be responsible for the parlour running out of it, and for somebody else missing out later. Perhaps there's a deal that if you get exactly the same ice cream five times, you get a sixth one free, and you've already ordered chocolate four times before, so you feel compelled to order it again this time. Or perhaps you don't really want an ice cream at all today, but you feel that peer pressure compels you to get one. You're not completely free after all!

If we consider a man alone, apart from his relation to everything around him, each action of his seems to us free. But if we see his relation to anything around him, if we see his connection with anything whatever – with a man who speaks to him, a book he reads, the work on which he is engaged, even with the air he breathes or the light that falls on the things about him – we see that each of these circumstances has an influence on him and controls at least some side of his activity. And the more we perceive of these influences the more our conception of his freedom diminishes and the more our conception of the necessity that weighs on him increases.

War and Peace, second epilogue, chapter IX

And, going back to the second example: you always have some control over your own destiny. You have but a few seconds to live. Do you cower in fear, flat on the floor? Do you cling to your loved one at your side? Do you grab a steak knife and hurl it defiantly out the window at the approaching asteroid? Or do you stand there, frozen to the spot, staring awestruck at the vehicle of your impending doom? It may seem pointless, weighing up these alternatives, when you and your whole world are about to be pulverised; but aren't your last moments in life, especially if they're desperate last moments, the ones by which you'll be remembered? And how do you know for certain that there will be nobody left to remember you (and does that matter anyway)? You're not completely bereft of choices after all!

… even if, admitting the remaining minimum of freedom to equal zero, we assumed in some given case – as for instance in that of a dying man, an unborn babe, or an idiot – complete absence of freedom, by so doing we should destroy the very conception of man in the case we are examining, for as soon as there is no freedom there is also no man. And so the conception of the action of a man subject solely to the law of inevitability without any element of freedom is just as impossible as the conception of a man's completely free action.

War and Peace, second epilogue, chapter X

Background story

Tolstoy's philosophical propositions in War and Peace were heavily influenced by the ideas of one of his contemporaries, the German philosopher Arthur Schopenhauer. In later years, Tolstoy candidly expressed his admiration for Schopenhauer, and he even went so far as to assert that, philosophically speaking, War and Peace was a repetition of Schopenhauer's seminal work The World as Will and Representation.

Schopenhauer's key idea, was that the whole universe (at least, as far as any one person is concerned) consists of two things: the will, which doesn't exist in physical form, but which is the essence of a person, and which contains all of one's drives and desires; and the representation, which is a person's mental model of all that he/she has sensed and interacted with in the physical realm. However, rather than describing the will as the engine of one's freedom, Schopenhauer argues that one is enslaved by the desires imbued in his/her will, and that one is liberated from the will (albeit only temporarily) by aesthetic experience.

Schopenhauer: big on grey tufts, small on optimism
Schopenhauer: big on grey tufts, small on optimism
Image source: 9gag

Schopenhauer's theories were, in turn, directly influenced by those of Immanuel Kant, who came a generation before him, and who is generally considered the greatest philosopher of the modern era. Kant's ideas (and his works) were many (and I have already written about Kant's ideas recently), but the one of chief concern here – as expounded primarily in his Critique of Pure Reason – was that there are two realms in the universe: the phenomenal, that is, the physical, the universe as we experience and understand it; and the noumenal, that is, a theoretical non-material realm where everything exists as a "thing-in-itself", and about which we know nothing, except for what we are able to deduce via practical reason. Kant argued that the phenomenal realm is governed by absolute causality (that is, by necessity), but that in the noumenal realm there exists absolute free will; and that the fact that a person exists in both realms simultaneously, is what gives meaning to one's decisions, and what makes them able to be measured and judged in terms of ethics.

We can trace the study of free will further through history, from Kant, back to Hume, to Locke, to Descartes, to Augustine, and ultimately back to Plato. In the writings of all these fine folks, over the millennia, there can be found common concepts such as a material vs an ideal realm, a chain of causation, and a free inner essence. The analysis has become ever more refined with each passing generation of metaphysics scholars, but ultimately, it has deviated very little from its roots in ancient times.

It's unique

There are certainly parallels between Tolstoy's War and Peace, and Schopenhauer's The World as Will and Representation (and, in turn, with other preceding works), but I for one disagree that the former is a mere regurgitation of the latter. Tolstoy is selling himself short. His theory of free will vs necessity is distinct from that of Schopenhauer (and from that of Kant, for that matter). And the way he explains his theory – in terms of a "spectrum of free-ness" – is original as far as I'm aware, and is laudable, if for no other reason, simply because of how clear and easy-to-grok it is.

It should be noted, too, that Tolstoy's philosophical views continued to evolve significantly, later in his life, years after writing War and Peace. At the dawn of the 1900s (by which time he was an old man), Tolstoy was best known for having established his own "rational" version of Christianity, which rejected all the rituals and sacraments of the Orthodox Church, and which gained a cult-like following. He also adopted the lifestyle choices – extremely radical at the time – of becoming vegetarian, of renouncing violence, and of living and dressing like a peasant.

Battle of Austerlitz
Battle of Austerlitz
Image source: Flickr

War and Peace is many things. It's an account of the Napoleonic Wars, its bloody battles, its geopolitik, and its tremendous human cost. It's a nostalgic illustration of the old Russian aristocracy – a world long gone – replete with lavish soirees, mountains of servants, and family alliances forged by marriage. And it's a tenderly woven tapestry of the lives of the main protagonists – their yearnings, their liveliest joys, and their deepest sorrows – over the course of two decades. It rightly deserves the praise that it routinely receives, for all those elements that make it a classic novel. But it also deserves recognition for the philosophical argument that Tolstoy peppers throughout the text, and which he dedicates the final pages of the book to making more fully fledged.

]]>
How can we make AI that reasons? https://greenash.net.au/thoughts/2019/03/how-can-we-make-ai-that-reasons/ Sat, 23 Mar 2019 00:00:00 +0000 https://greenash.net.au/thoughts/2019/03/how-can-we-make-ai-that-reasons/ The past decade or so has been touted as a high point for achievements in Artificial Intelligence (AI). For the first time, computers have demonstrated formidable ability in such areas as image recognition, speech recognition, gaming, and (most recently) autonomous driving / piloting. Researchers and companies that are heavily invested in these technologies, at least, are in no small way lauding these successes, and are giving us the pitch that the current state-of-the-art is nothing less than groundbreaking.

However, as anyone exposed to the industry knows, the current state-of-the-art is still plagued by fundamental shortcomings. In a nutshell, the current generation of AI is characterised by big data (i.e. a huge amount of sample data is needed in order to yield only moderately useful results), big hardware (i.e. a giant amount of clustered compute resources is needed, again in order to yield only moderately useful results), and flawed algorithms (i.e. algorithms that, at the end of the day, are based on statistical analysis and not much else – this includes the latest Convolutional Neural Networks). As such, the areas of success (impressive though they may be) are still dwarfed by the relative failures, in areas such as natural language conversation, criminal justice assessment, and art analysis / art production.

In my opinion, if we are to have any chance of reaching a higher plane of AI – one that demonstrates more human-like intelligence – then we must lessen our focus on statistics, mathematics, and neurobiology. Instead, we must turn our attention to philosophy, an area that has traditionally been neglected by AI research. Only philosophy (specifically, metaphysics and epistemology) contains the teachings that we so desperately need, regarding what "reasoning" means, what is the abstract machinery that makes reasoning possible, and what are the absolute limits of reasoning and knowledge.

What is reason?

There are many competing theories of reason, but the one that I will be primarily relying on, for the rest of this article, is that which was expounded by 18th century philosopher Immanuel Kant, in his Critique of Pure Reason and other texts. Not everyone agrees with Kant, however his is generally considered the go-to doctrine, if for no other reason (no pun intended), simply because nobody else's theories even come close to exploring the matter in such depth and with such thoroughness.

Immanuel Kant's head (lots of philosophy inside)
Immanuel Kant's head (lots of philosophy inside)
Image source: Wikimedia Commons

One of the key tenets of Kant's work, is that there are two distinct types of propositions: an analytic proposition, which can be universally evaluated purely by considering the meaning of the words in the statement; and a synthetic proposition, which cannot be universally evaluated, because its truth-value depends on the state of the domain in question. Further, Kant distinguishes between an a priori proposition, which can be evaluated without any sensory experience; and an a posteriori proposition, which requires sensory experience in order to be evaluated.

So, analytic a priori statements are basically tautologies: e.g. "All triangles have three sides" – assuming the definition of a triangle (a 2D shape with three sides), and assuming the definition of a three-sided 2D shape (a triangle), this must always be true, and no knowledge of anything in the universe (except for those exact rote definitions) is required.

Conversely, synthetic a posteriori statements are basically unprovable real-world observations: e.g. "Neil Armstrong landed on the Moon in 1969" – maybe that "small step for man" TV footage is real, or maybe the conspiracy theorists are right and it was all a hoax; and anyway, even if your name was Buzz Aldrin, and you had seen Neil standing there right next to you on the Moon, how could you ever fully trust your own fallible eyes and your own fallible memory? It's impossible for there to be any logical proof for such a statement, it's only possible to evaluate it based on sensory experience.

Analytic a posteriori statements, according to Kant, are impossible to form.

Which leaves what Kant is most famous for, his discussion of synthetic a priori statements. An example of such a statement is: "A straight line between two points is the shortest". This is not a tautology – the terms "straight line between two points" and "shortest" do not define each other. Yet the statement can be universally evaluated as true, purely by logical consideration, and without any sensory experience. How is this so?

Kant asserts that there are certain concepts that are "hard-wired" into the human mind. In particular, the concepts of space, time, and causality. These concepts (or "forms of sensibility", to use Kant's terminology) form our "lens" of the universe. Hence, we are able to evaluate statements that have a universal truth, i.e. statements that don't depend on any sensory input, but that do nevertheless depend on these "intrinsic" concepts. In the case of the above example, it depends on the concept of space (two distinct points can exist in a three-dimensional space, and the shortest distance between them must be a straight line).

Another example is: "Every event has a cause". This is also universally true; at least, it is according to the intrinsic concepts of time (one event happens earlier in time, and another event happens later in time), and causality (events at one point in space and time, affect events at a different point in space and time). Maybe it would be possible for other reasoning entities (i.e. not humans) to evaluate these statements differently, assuming that such entities were imbued with different "intrinsic" concepts. But it is impossible for a reasoning human to evaluate those statements any other way.

The actual machinery of reasoning, as Kant explains, consists of twelve "categories" of understanding, each of which has a corresponding "judgement". These categories / judgements are essentially logic operations (although, strictly speaking, they predate the invention of modern predicate logic, and are based on Aristotle's syllogism), and they are as follows:

Group    | Category                | Judgement    | Example
Quantity | Unity                   | Universal    | All trees have leaves
Quantity | Plurality               | Particular   | Some dogs are shaggy
Quantity | Totality                | Singular     | This ball is bouncy
Quality  | Reality                 | Affirmative  | Chairs are comfy
Quality  | Negation                | Negative     | No spoons are shiny
Quality  | Limitation              | Infinite     | Oranges are not blue
Relation | Inherence / Subsistence | Categorical  | Happy people smile
Relation | Causality / Dependence  | Hypothetical | If it's February, then it's hot
Relation | Community               | Disjunctive  | Potatoes are baked or fried
Modality | Existence               | Assertoric   | Sharks enjoy eating humans
Modality | Possibility             | Problematic  | Beer might be frothy
Modality | Necessity               | Apodictic    | 6 times 7 equals 42

The cognitive mind is able to evaluate all of the above possible propositions, according to Kant, with the help of the intrinsic concepts (note that these intrinsic concepts are not considered to be "innate knowledge", as defined by the rationalist movement), and also with the help of the twelve categories of understanding.
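
To make the "logic operations" point concrete, here's my own rough mapping of a few of the judgement forms from the table above onto the kind of logic operations that any modern programming language provides (the toy domain is invented purely for illustration; Kant himself, of course, predates predicate logic):

```python
# A few of Kant's judgement forms, expressed as modern logic operations over a
# toy domain. The domain and predicates are my own invention, purely to
# illustrate the mapping.
trees = [{"species": "oak", "has_leaves": True}, {"species": "elm", "has_leaves": True}]
dogs = [{"breed": "poodle", "shaggy": True}, {"breed": "greyhound", "shaggy": False}]

# Universal judgement: "All trees have leaves"
all_trees_have_leaves = all(tree["has_leaves"] for tree in trees)

# Particular judgement: "Some dogs are shaggy"
some_dogs_are_shaggy = any(dog["shaggy"] for dog in dogs)

# Hypothetical judgement: "If it's February, then it's hot"
def february_implies_hot(month, is_hot):
    return month != "February" or is_hot

print(all_trees_have_leaves, some_dogs_are_shaggy, february_implies_hot("February", True))
# -> True True True
```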

Reason, therefore, is the ability to evaluate arbitrary propositions, using such cognitive faculties as logic and intuition, and based on understanding and sensibility, which are bridged by way of "forms of sensibility".

AI with intrinsic knowledge

If we consider existing AI with respect to the above definition of reason, it's clear that the capability is already developed maturely in some areas. In particular, existing AI – especially Knowledge Representation (KR) systems – has no problem whatsoever with formally evaluating predicate logic propositions. Existing AI – especially AI based on supervised learning methods – also excels at receiving and (crudely) processing large amounts of sensory input.

So, at one extreme end of the spectrum, there are pure ontological knowledge-base systems such as Cyc, where virtually all of the input into the system consists of hand-crafted factual propositions, and where almost none of the input is noisy real-world raw data. Such systems currently require a massive quantity of carefully curated facts to be on hand, in order to make inferences of fairly modest real-world usefulness.

Then, at the other extreme, there are pure supervised learning systems such as Google's NASNet, where virtually all of the input into the system consists of noisy real-world raw data, and where almost none of the input is human-formulated factual propositions. Such systems currently require a massive quantity of raw data to be on hand, in order to perform classification and regression tasks whose accuracy varies wildly depending on the target data set.

What's clearly missing, is something to bridge these two extremes. And, if transcendental idealism is to be our guide, then that something is "forms of sensibility". The key element of reason that humans have, and that machines currently lack, is a "lens" of the universe, with fundamental concepts of the nature of the universe – particularly of space, time, and causality – embodied in that lens.

Space and time
Space and time
Image source: Forbes

What fundamental facts about the universe would a machine require, then, in order to have "forms of sensibility" comparable to that of a human? Well, if we were to take this to the extreme, then a machine would need to be imbued with all the laws of mathematics and physics that exist in our universe. However, let's assume that going to this extreme is neither necessary nor possible, for various reasons, including: we humans are probably only imbued with a subset of those laws (the ones that apply most directly to our everyday existence); it's probably impossible to discover the full set of those laws; and, we will assume that, if a reasoning entity is imbued only with an appropriate subset of those laws, then it's possible to deduce the remainder of the laws (and it's therefore also possible to deduce all other facts relating to observable phenomena in the universe).

I would, therefore, like to humbly suggest, in plain English, what some of these fundamental facts, suitable for comprising the "forms of sensibility" of a reasoning machine, might be:

  • There are four dimensions: three space dimensions, and one time dimension
  • An object exists if it occupies one or more points in space and time
  • An object exists at zero or one points in space, given a particular point in time
  • An object exists at zero or more points in time, given a particular point in space
  • An event occurs at one point in space and time
  • An event is caused by one or more different events at a previous point in time
  • Movement is an event that involves an object changing its position in space and time
  • An object can observe its relative position in, and its movement through, space and time, using the space concepts of left, right, ahead, behind, up, and down, and using the time concepts of forward and backward
  • An object can move in any direction in space, but can only move forward in time

I'm not suggesting that the above list is really a sufficient number of intrinsic concepts for a reasoning machine, nor that all of the above facts are the correct choice nor correctly worded for such a list. But this list is a good start, in my opinion. If an "intelligent" machine were to be appropriately imbued with those facts, then that should be a sufficient foundation for it to evaluate matters of space, time, and causality.
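
Purely as a thought experiment (and not as a claim about how such a system should actually be built), here's a rough sketch of how a few of those facts might be encoded as machine-checkable rules. The class names and the rules themselves are entirely my own invention:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    """A point in the machine's 'lens' of the universe:
    three space dimensions plus one time dimension."""
    x: float
    y: float
    z: float
    t: float

@dataclass(frozen=True)
class Event:
    """An event occurs at exactly one point in space and time."""
    location: Point

def can_cause(cause: Event, effect: Event) -> bool:
    """Intrinsic rule: an event can only be caused by events at a previous point in time."""
    return cause.location.t < effect.location.t

def is_valid_movement(before: Point, after: Point) -> bool:
    """Intrinsic rule: an object can move in any direction in space,
    but can only move forward in time."""
    return after.t > before.t

# Two propositions the machine can now evaluate without any sensory input:
assert can_cause(Event(Point(0, 0, 0, 1.0)), Event(Point(5, 0, 0, 2.0)))
assert not is_valid_movement(Point(0, 0, 0, 2.0), Point(1, 1, 1, 1.0))
```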

There are numerous other intrinsic aspects of human understanding that it would also, arguably, be essential for a reasoning machine to possess. Foremost of these is the concept of self: does AI need a hard-wired idea of "I"? Other such concepts include matter / substance, inertia, life / death, will, freedom, purpose, and desire. However, it's a matter of debate, rather than a given, whether each of these concepts is fundamental to the foundation of human-like reasoning, or whether each of them is learned and acquired as part of intellectual experience.

Reasoning AI

A machine as discussed so far is a good start, but it's still not enough to actually yield what would be considered human-like intelligence. Cyc, for example, is an existing real-world system that basically already has all these characteristics – it can evaluate logical propositions of arbitrary complexity, based on a corpus (a much larger one than my humble list above) of intrinsic facts, and based on some sensory input – yet no real intelligence has emerged from it.

One of the most important missing ingredients, is the ability to hypothesise. That is, based on the raw sensory input of real-world phenomena, the ability to observe a pattern, and to formulate a completely new, original proposition expressing that pattern as a rule. On top of that, it includes the ability to test such a proposition against new data, and, when the rule breaks, to modify the proposition such that the rule can accommodate that new data. That, in short, is what is known as inductive reasoning: forming a general rule from specific observations, then testing and refining it.

A child formulates rules in this way. For example, a child observes that when she drops a drinking glass, the glass shatters the moment that it hits the floor. She drops a glass in this way several times, just for fun (plenty of fun for the parents too, naturally), and observes the same result each time. At some point, she formulates a hypothesis along the lines of "drinking glasses break when dropped on the floor". She wasn't born knowing this, nor did anyone teach it to her; she simply "worked it out" based on sensory experience.

Some time later, she drops a glass onto the floor in a different room of the house, still from shoulder-height, but it does not break. So she modifies the hypothesis to be "drinking glasses break when dropped on the kitchen floor" (but not the living room floor). But then she drops a glass in the bathroom, and in that case it does break. So she modifies the hypothesis again to be "drinking glasses break when dropped on the kitchen or the bathroom floor".

But she's not happy with this latest hypothesis, because it's starting to get complex, and the human mind strives for simple rules. So she stops to think about what makes the kitchen and bathroom floors different from the living room floor, and realises that the former are hard (tiled), whereas the latter is soft (carpet). So she refines the hypothesis to be "drinking glasses break when dropped on a hard floor". And thus, based on trial-and-error, and based on additional sensory experience, the facts that comprise her understanding of the world have evolved.
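
A toy sketch of that refinement loop – purely illustrative, with made-up observations and a deliberately naive rule representation – might look something like this:

```python
# Each observation: (room, floor_type, glass_broke)
observations = [
    ("kitchen", "hard", True),
    ("kitchen", "hard", True),
    ("living room", "soft", False),
    ("bathroom", "hard", True),
]

def learn_breakage_rule(observations):
    """Naive hypothesis refinement: try the floor-type attribute first (the
    simpler rule), then the room attribute, and keep whichever one cleanly
    separates 'broke' from 'didn't break'."""
    for attribute_index, attribute_name in ((1, "the floor type"), (0, "the room")):
        broke = {obs[attribute_index] for obs in observations if obs[2]}
        intact = {obs[attribute_index] for obs in observations if not obs[2]}
        if not broke & intact:  # no contradictions for this attribute
            return f"glasses break when dropped where {attribute_name} is one of {sorted(broke)}"
    return "no simple rule fits the observations yet"

print(learn_breakage_rule(observations))
# -> glasses break when dropped where the floor type is one of ['hard']
```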

Broken glass on the floor
Broken glass on the floor
Image source: CoreSight

Some would argue that current state-of-the-art AI is already able to formulate rules, by way of feature learning (e.g. in image recognition). However, a "feature" in a neural network is just a number, either one directly taken from the raw data, or one derived from the raw data by the network's learned weights and activation functions. So when a neural network determines the "features" that correspond to a duck, those features are just numbers that represent the average outline of a duck, the average colour of a duck, and so on. A neural network doesn't formulate any actual facts about a duck (e.g. "ducks are yellow"), which can subsequently be tested and refined (e.g. "bath toy ducks are yellow"). It just knows that if the image it's processing has a yellowish oval object occupying the main area, there's a 63% probability that it's a duck.

Another faculty that the human mind possesses, and that AI currently lacks, is intuition. That is, the ability to reach a conclusion based directly on sensory input, without resorting to logic as such. The exact definition of intuition, and how it differs from instinct, is not clear (in particular, both are sometimes defined as a "gut feeling"). It's also unclear whether or not some form of intuition is an essential ingredient of human-like intelligence.

It's possible that intuition is nothing more than a set of rules, that get applied either before proper logical reasoning has a chance to kick in (i.e. "first resort"), or after proper logical reasoning has been exhausted (i.e. "last resort"). For example, perhaps after a long yet inconclusive analysis of competing facts, regarding whether your Uncle Jim is telling the truth or not when he claims to have been to Mars (e.g. "Nobody has ever been to Mars", "Uncle Jim showed me his medal from NASA", "Mum says Uncle Jim is a flaming crackpot", "Uncle Jim showed me a really red rock"), your intuition settles the matter with the rule: "You should trust your own family". But, on the other hand, it's also possible that intuition is a more elementary mechanism, and that it can't be expressed in the form of logical rules at all: instead, it could simply be a direct mapping of "situations" to responses.

Is reason enough?

In order to test whether a hypothetical machine, as discussed so far, is "good enough" to be considered intelligent, I'd like to turn to one of the domains that current-generation AI is already pursuing: criminal justice assessment. One particular area of this domain, in which the use of AI has grown significantly, is determining whether an incarcerated person should be approved for parole or not. Unsurprisingly, AI's having input into such a decision has so far, in real life, not been considered altogether successful.

The current AI process for this is based almost entirely on statistical analysis. That is, the main input consists of simple numeric parameters, such as: number of incidents reported during imprisonment; level of severity of the crime originally committed; and level of recurrence of criminal activity. The input also includes numerous profiling parameters regarding the inmate, such as: racial / ethnic group; gender; and age. The algorithm, regardless of any bells and whistles it may claim, is invariably simply answering the question: for other cases with similar input parameters, were they deemed eligible for parole? And if so, did their conduct after release demonstrate that they were "reformed"? And based on that, is this person eligible for parole?

Current-generation AI, in other words, is incapable of considering a single such case based on its own merits, nor of making any meaningful decision regarding that case. All it can do, is compare the current case to its training data set of other cases, and determine how similar the current case is to those others.
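
To make that criticism concrete, here's a deliberately crude sketch of the kind of "compare to similar past cases" logic being described. The parameters, the toy data, and the scoring are all my own inventions – real systems are far more elaborate – but the underlying approach is much the same:

```python
# Each past case: (incidents_in_prison, crime_severity, prior_offences, paroled_and_reformed)
past_cases = [
    (0, 2, 1, True),
    (3, 4, 2, False),
    (1, 3, 0, True),
    (5, 5, 4, False),
]

def similarity(case_a, case_b):
    """Higher value = more similar numeric parameters."""
    return -sum((a - b) ** 2 for a, b in zip(case_a, case_b))

def recommend_parole(current_case, k=3):
    """Recommend parole based purely on how the most similar past cases turned out."""
    neighbours = sorted(past_cases, key=lambda c: similarity(current_case, c[:3]), reverse=True)[:k]
    reformed_count = sum(1 for c in neighbours if c[3])
    return reformed_count > k / 2

print(recommend_parole((1, 3, 1)))  # -> True, because the most similar past cases "reformed"
```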

A human deciding parole eligibility, on the other hand, does consider the case in question based on its own merits. Sure, a human also considers the numeric parameters and the profiling parameters that a machine can so easily evaluate. But a human also considers each individual event in the inmate's history as a stand-alone fact, and each such fact can affect the final decision differently. For example, perhaps the inmate seriously assaulted other inmates twice while imprisoned. But perhaps he also read 150 novels, and finished a university degree by correspondence. These are not just statistics, they're facts that must be considered, and each fact must refine the hypothesis whose final form is either "this person is eligible for parole", or "this person is not eligible for parole".

A human is also influenced by morals and ethics, when considering the character of another human being. So, although the question being asked is officially: "is this person eligible for parole?", the question being considered in the judge's head may very well actually be: "is this person good or bad?". Should a machine have a concept of ethics, and/or of good vs bad, and should it apply such ethics when considering the character of an individual human? Most academics seem to think so.

According to Kant, ethics is based on a foundation of reason. But that doesn't mean that a reasoning machine is automatically an ethical machine, either. Does AI need to understand ethics, in order to possess what we would consider human-like intelligence?

Although decisions such as parole eligibility are supposed to be objective and rational, a human is also influenced by emotions, when considering the character of another human being. Maybe, despite the evidence suggesting that the inmate is not reformed, the judge is stirred by a feeling of compassion and pity, and this feeling results in parole being granted. Or maybe, despite the evidence being overwhelmingly positive, the judge feels fear and loathing towards the inmate, mainly because of his tough physical appearance, and this feeling results in parole being denied.

Should human-like AI possess the ability to be "stirred" by such emotions? And would it actually be desirable for AI to be affected by such emotions, when evaluating the character of an individual human? Some such emotions might be considered positive, while others might be considered negative (particularly from an ethical point of view).

I think the ultimate test in this domain – perhaps the "Turing test for criminal justice assessment" – would be if AI were able to understand, and to properly evaluate, this great parole speech, which is one of my personal favourite movie quotes:

There's not a day goes by I don't feel regret. Not because I'm in here, or because you think I should. I look back on the way I was then: a young, stupid kid who committed that terrible crime. I want to talk to him. I want to try and talk some sense to him, tell him the way things are. But I can't. That kid's long gone and this old man is all that's left. I got to live with that. Rehabilitated? It's just a bulls**t word. So you can go and stamp your form, Sonny, and stop wasting my time. Because to tell you the truth, I don't give a s**t.

"Red" (Morgan Freeman)

The Shawshank Redemption (1994)

Red's parole hearing
Red's parole hearing
Image source: YouTube

In the movie, Red's parole was granted. Could we ever build an AI that could also grant parole in that case, and for the same reasons? On top of needing the ability to reason with real facts, and to be affected by ethics and by emotion, properly evaluating such a speech requires the ability to understand humour – black humour, no less – along with apathy and cynicism. No small task.

Conclusion

Sorry if you were expecting me to work wonders in this article, and to actually teach the world how to build artificial intelligence that reasons. I don't have the magic answer to that million dollar question. However, I hope I have achieved my aim here, which was to describe what's needed in order for it to even be possible for such AI to come to fruition.

It should be clear, based on what I've discussed here, that most current-generation AI is based on a completely inadequate foundation for even remotely human-like intelligence. Chucking big data at a statistic-crunching algorithm on a fat cluster might be yielding cool and even useful results, but it will never yield intelligent results. As centuries of philosophical debate can teach us – if only we'd stop and listen – human intelligence rests on specific building blocks. These include, at the very least, an intrinsic understanding of time, space, and causality; and the ability to hypothesise based on experience. If we are to ever build a truly intelligent artificial agent, then we're going to have to figure out how to imbue it with these things.

]]>
The eccentric tale of Gustave Eiffel and his Tower https://greenash.net.au/thoughts/2018/10/the-eccentric-tale-of-gustave-eiffel-and-his-tower/ Tue, 16 Oct 2018 00:00:00 +0000 https://greenash.net.au/thoughts/2018/10/the-eccentric-tale-of-gustave-eiffel-and-his-tower/ The Eiffel Tower, as it turns out, is far more than just the most iconic tourist attraction in the world. As the tallest structure ever built by man at the time – and holder of the record "tallest man-made structure in the world" for 41 years, following its completion in 1889 – it was a revolutionary feat of structural engineering. It was also highly controversial – deeply unpopular, one might even say – with some of the most prominent Parisians of the day fiercely protesting against its "monstrous" form. And Gustave Eiffel, its creator, was brilliant, ambitious, eccentric, and thick-skinned.

From reading the wonderful epic novel Paris, by Edward Rutherford, I learned some facts about Gustave Eiffel's life, and about the Eiffel Tower's original conception, its construction, and its first few decades as the exclamation mark of the Paris skyline, that both surprised and intrigued me. Allow me to share these tidbits of history in this here humble article.

Gustave Eiffel hanging out with a model of his Tower.
Gustave Eiffel hanging out with a model of his Tower.
Image source: domain.com.au.

To begin with, the Eiffel Tower was not designed by Gustave Eiffel. The original idea and the first drafts of the design were produced by one Maurice Koechlin, who worked at Eiffel's firm. The same is true of Eiffel's other great claim to fame, the Statue of Liberty (which he built just before the Tower): after Eiffel's firm took over the project of building the statue, it was Koechlin who came up with Liberty's ingenious inner iron truss skeleton, and outer copper "skin", that makes her highly wind-resistant in the midst of blustery New York Harbour. It was a similar story for the Garabit Viaduct, and various other projects: although Eiffel himself was a highly capable engineer, it was Koechlin who was the mastermind, while Eiffel was the salesman and the celebrity.

Eiffel, and his colleagues Maurice Koechlin and Émile Nouguier, were engineers, not designers. In particular, they were renowned bridge-builders of their time. As such, their tower design was all about the practicalities of wind resistance, thermal expansion, and material strength; the Tower's aesthetic qualities were secondary considerations, with architect Stephen Sauvestre only being invited to contribute an artistic touch (such as the arches on the Tower's base), after the initial drafts were completed.

Koechlin's first draft of the Eiffel Tower.
Koechlin's first draft of the Eiffel Tower.
Image source: Wikimedia Commons.

The Eiffel Tower was built as the centrepiece of the 1889 Exposition Universelle in Paris, after winning the 1886 competition that was held to find a suitable design. However, after choosing it, the City of Paris then put forward only a fraction of the money needed to build it, rather than the Tower's full estimated budget. As such, Eiffel agreed to cover the remainder of the construction costs out of his own pocket, but only on the condition that he receive all commercial income from the Tower, for 20 years from the date of its inauguration. This proved to be much to Eiffel's advantage in the long-term, as the Tower's income just during the Exposition Universelle itself – i.e. just during the first six months of its operating life – more than covered Eiffel's out-of-pocket costs; and the Tower has consistently operated at a profit ever since.

Illustration of the Eiffel Tower during the 1889 World's Fair.
Illustration of the Eiffel Tower during the 1889 World's Fair.
Image source: toureiffel.paris.

Pioneering construction projects of the 19th century (and, indeed, of all human history before then too) were, in general, hardly renowned for their occupational safety standards. I had always assumed that the building of the Eiffel Tower, which saw workmen reach more dizzying heights than ever before, had taken no small toll of lives. However, it just so happens that Gustave Eiffel was more than a mere engineer and a bourgeois, he was also a pioneer of safety: thanks to his insistence on the use of devices such as guard rails and movable stagings, the Eiffel Tower project amazingly saw only one fatality; and it wasn't even really a workplace accident, as the deceased, a workman named Angelo Scagliotti, climbed the tower while off duty, to impress his girlfriend, and sadly lost his footing.

The Tower's three levels, and its lifts and staircases, have always been accessible to the general public. However, something that not all visitors to the Tower may be aware of, is that near the summit of the Tower, just above the third level's viewing platform, sits what was originally Gustave Eiffel's private apartment. For the 20 years that he owned the rights to the Tower, Eiffel also enjoyed his own bachelor pad at the top! Eiffel reportedly received numerous requests to rent out the pad for a night, but he never did so, instead only inviting distinguished guests of his choosing, such as (no less than) Thomas Edison. The apartment is now open to the public as a museum. Still no word regarding when it will be listed on Airbnb; although another private apartment was more recently added lower down in the Tower and was rented out.

Eiffel's modest abode atop the world, as it looks today.
Eiffel's modest abode atop the world, as it looks today.
Image source: The Independent.

So why did Eiffel's contract for the rights to the Tower stipulate 20 years? Because the plan was, that after gracing the Paris cityscape for that many years, it was to be torn down! That's right, the Eiffel Tower – which today seems like such an invincible monument – was only ever meant to be a temporary structure. And what saved it? Was it that the City Government came to realise what a tremendous cash cow it could inherit? Was it that Parisians came to love and to admire what they had considered to be a detestable blight upon their elegant city? Not at all! The only thing that saved the Eiffel Tower was that, a few years prior to its scheduled doomsday, a little thing known as radio had been invented. The French military, who had started using the Tower as a radio antenna – realising that it was the best antenna in all of Paris, if not the world at that time – promptly declared the Tower vital to the defence of Paris, thus staving off the wrecking ball.

The big crew gathered at the base of the Tower, Jul 1888.
The big crew gathered at the base of the Tower, Jul 1888.
Image source: busy.org.

And the rest, as they say, is history. There are plenty more intriguing anecdotes about the Eiffel Tower, if you're interested in delving further. The Tower continued to have a colourful life, after the City of Paris relieved Eiffel of his rights to it in 1909, and after his death in 1923; and the story continues to this day. So, next time you have the good fortune of visiting La belle Paris, remember that there's much more to her tallest monument than just a fine view from the top.

]]>
Twelve ASX stocks with record growth since 2000 https://greenash.net.au/thoughts/2018/05/twelve-asx-stocks-with-record-growth-since-2000/ Tue, 29 May 2018 00:00:00 +0000 https://greenash.net.au/thoughts/2018/05/twelve-asx-stocks-with-record-growth-since-2000/ I recently built a little web app called What If Stocks, to answer the question: based on a start and end date, and a pool of stocks and historical prices, what would have been the best stocks to invest in? This app isn't rocket science, it just ranks the stocks based on one simple metric: change in price during the selected period.

I imported into this app, price data from 2000 to 2018, for all ASX (Australian Securities Exchange) stocks that have existed for roughly the whole of that period. I then examined the results, for all possible 5-year and 10-year periods within that date range. I'd therefore like to share with you, what this app calculated to be the 12 Aussie stocks that have ranked No. 1, in terms of market price increase, for one or more of those periods.
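
For what it's worth, the core ranking metric boils down to very little code. Here's a rough sketch of it in Python (not the app's actual implementation – the data structure is simplified, the IPL figures are the ones quoted later in this article, and the XYZ and ABC tickers and their prices are made up purely so that there's something to rank against):

```python
# Hypothetical price history: ticker -> {date: closing price}.
prices = {
    "IPL": {"2005-01-01": 0.0006, "2015-01-01": 3.57},
    "XYZ": {"2005-01-01": 1.00, "2015-01-01": 3.00},
    "ABC": {"2005-01-01": 2.00, "2015-01-01": 1.00},
}

def rank_stocks(prices, start_date, end_date):
    """Rank stocks by percentage change in price between two dates."""
    changes = {}
    for ticker, series in prices.items():
        start, end = series.get(start_date), series.get(end_date)
        if start is not None and end is not None:
            changes[ticker] = (end - start) / start * 100
    return sorted(changes.items(), key=lambda item: item[1], reverse=True)

for ticker, pct_change in rank_stocks(prices, "2005-01-01", "2015-01-01"):
    print(f"{ticker}: {pct_change:+,.0f}%")
# -> IPL: +594,900%, then XYZ: +200%, then ABC: -50%
```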

1. Incitec Pivot (ASX:IPL)

No. 1 growth stock 2005-2015 ($0.0006 - $3.57, +595,000%), and 2006-2016

If you've never heard of this company before, don't worry, neither had I. Incitec Pivot is a fertiliser and explosives chemicals production company. It's the largest fertiliser manufacturer in Australia, and the second-largest explosives chemicals manufacturer in the world. It began in the early 2000s as the merger of former companies Incitec Fertilizers and the Pivot Group.

Incitec Pivot was a very cheaply priced stock for its first few years on the ASX, 2003-2005. Then, between 2005 and 2008, its value rocketed up as it acquired numerous other companies, and significantly expanded its manufacturing operations. So, in terms of a 5-year or 10-year return, it was a fabulous stock to invest in throughout the 2003-2007 period. However, its growth has been mediocre or poor since 2008.

2. Monadelphous Group (ASX:MND)

No. 1 growth stock 2000-2010 ($0.0119 - $7.19, +60,000%)

Monadelphous is a mining infrastructure (and other industrial infrastructure) construction company based in Perth. They build, expand, and manage big installations such as LNG plants, iron ore ports, oil pipelines, and water treatment plants, in Northern Australia and elsewhere.

By the volatile standards of the mining industry (which it's basically a part of), Monadelphous has experienced remarkably consistent growth. In particular, it enjoyed almost constant growth from 2000 to 2013, which means that, in terms of a 5-year or 10-year return, it was an excellent stock to invest in throughout the 2000-2007 period. Monadelphous is somewhat vulnerable to mining crashes, although it recovered well after the 2008 GFC. However, its growth has been mediocre or poor for much of the time since 2013.

3. Fortescue Metals Group (ASX:FMG)

No. 1 growth stock 2001-2011 ($0.0074 - $4.05, +55,000%), and 2002-2012, and 2003-2013

Fortescue is one of the world's largest iron ore producers. Started in the early 2000s as a tiny company, in the hands of Andrew Forrest (now one of Australia's richest people) it has grown to rival the long-time iron ore giants BHP and Rio Tinto. Fortescue owns and operates some of the world's largest iron ore mines, in the Pilbara region of Western Australia.

Fortescue was a small company and a low-value stock until 2006, when its share price shot up. Apart from a massive spike in 2008 (before the GFC), and various other high times along the way, its price has remained relatively flat since then. So, in terms of a 5-year or 10-year return, it was an excellent investment throughout the 2000-2007 period. However, its growth has been mediocre or poor since 2008.

4. CTI Logistics (ASX:CLX)

No. 1 growth stock 2004-2014 ($0.0213 - $1.46, +6,800%)

CTI is a freight and heavy hauling company based in Perth. It does a fair chunk of its business hauling and storing materials for the mining industry. However, it also operates a strong consumer parcel delivery service.

CTI experienced its price surge almost entirely during 2005 and 2006. Since then, its price has been fairly stable, except that it rose somewhat during the 2010-2013 mining boom, and then fell back to its old levels during the 2014-2017 mining crash. In terms of a 5-year or 10-year return, it was a good investment throughout the 2000-2011 period.

5. Credit Corp Group (ASX:CCP)

No. 1 growth stock 2008-2018 ($0.59 - $19.52, +3,200%)

Credit Corp Group is a debt collection company. As that description suggests, and as some quick googling confirms, they're the kind of company you do not want to have dealings with. They are apparently one of those companies that hounds people who have unpaid phone bills, credit card bills, and the like.

Anyway, getting indebted persons to pay up (with interest, of course) is apparently a business that pays off, because Credit Corp has shown consistent growth for the entire period being analysed here. In terms of a 5-year or 10-year return, it was a solid investment for most of 2000-2018 (and it appears to still be on a growth trajectory), although it yielded not so great returns for those buying in 2003-2007. All up, one of the strongest growth stocks in this list.

6. Ainsworth Game Technology (ASX:AGI)

No. 1 growth stock 2008-2013 ($0.11 - $3.34, +2,800%), and 2009-2014

Ainsworth Game Technology is a poker machine (aka slot machine) manufacturing company. It's based in Sydney, where it no doubt enjoys plenty of business, NSW being home to half of all pokies in Australia, and to the second-largest number of pokies in the world, beaten only by Las Vegas.

Ainsworth stocks experienced fairly flat long-term growth during 2000-2011, but then in 2012 and 2013 the price rose significantly. They have been back on a downhill slide since then, but remain strong by historical standards. In terms of a 5-year or 10-year return, it was a good investment throughout 2003-2011, a good chunk of the period being analysed.

7. Copper Strike (ASX:CSE)

No. 1 growth stock 2010-2015 ($0.0095 - $0.23, +2,300%)

Copper Strike is a mining company. It appears that in the past, it operated mines of its own (copper mines, as its name suggests). However, the only significant thing that it currently does, is make money as a large shareholder of another ASX-listed mining company, Syrah Resources (ASX:SYR), which Copper Strike spun off from itself in 2007, and whose principal activity is a graphite mine in Mozambique.

Copper Strike has experienced quite consistent strong growth since 2010. In terms of a 5-year or 10-year return, it has been a quality investment since 2004 (which is when it was founded and first listed). However, its relatively tiny market cap, plus the fact that it seems to lack any core business activity of its own, makes it a particularly risky investment for the future.

8. Domino's Pizza Enterprises (ASX:DMP)

No. 1 growth stock 2007-2017 ($2.13 - $50.63, +2,280%)

The only company on this list that absolutely everybody should have heard of, Domino's is Australia's largest pizza chain, and Australia is also home to the biggest market for Domino's in the world. Founded in Australia in 1983, Domino's has been a listed ASX company since 2005.

Domino's has been on a non-stop roller-coaster ride of growth and profit, ever since it first listed in 2005. In terms of a 5-year or 10-year return, it has been a fabulous investment since then, more-or-less up to the present day. However, the stock price of Domino's has been dealt a blow for the past two years or so, in the face of reported weak profits, and claims of widespread underpayment of its employees.

9. Vita Group (ASX:VTG)

No. 1 growth stock 2011-2016 ($0.14 - $3.12, +2,100%)

Vita Group is the not-so-well-known name of a well-known Aussie brand, the mobile phone retail chain Fone Zone. These days, though, there are only a few Fone Zone branded stores left, and Vita's main business consists of the 100 or so Telstra retail outlets that it owns and manages across the country.

Vita's share price rose to a great peak in 2016, and then fell. In terms of overall performance since it was listed in 2005, Vita's growth has been fairly flat. In terms of a 5-year or 10-year return, it has been a decent investment throughout 2005-2013. Vita may experience strong growth again in future, but it appears more likely to be a low-growth stable investment (at best) from here on.

10. Red River Resources (ASX:RVR)

No. 1 growth stock 2013-2018 ($0.0153 - $0.31, +1,900%)

Red River is a zinc mining company. Its main operation is the Thalanga mine in northern Queensland.

Red River is one of the most volatile stocks in this list. Its price has gone up and down on many occasions. In terms of a 5-year or 10-year return, it was a good investment for 2011-2013, but it was a dud investment for 2005-2010.

11. Pro Medicus (ASX:PME)

No. 1 growth stock 2012-2017 ($0.29 - $5.81, +1,870%)

Pro Medicus is a medical imaging software development company. Its flagship product, called Visage, provides a full suite of desktop and server software for use in radiology. Pro Medicus software is used by a number of health care providers in Australia, the US, and elsewhere.

Pro Medicus has been quite a modest stock for most of its history, reporting virtually flat price growth for a long time. However, since 2015 its price has rocketed up, and it's currently riding a big high. This has apparently been due to the company winning several big contracts, particularly with clinics in the US. It looks on track to continue delivering solid growth.

12. Macquarie Telecom Group (ASX:MAQ)

No. 1 growth stock 2007-2012 ($0.71 - $7.01, +885%)

Macquarie Telecom Group is an enterprise telecommunications and data hosting services company. It provides connectivity services and data centres for various Australian government departments, educational institutions, and medium-to-large businesses.

Macquarie Telecom's share price crashed quite dramatically after the dot-com boom around 2000, and didn't really recover again until after the GFC in 2009. It has been riding the cloud boom for some time now, and it appears to be stable in the long-term. In terms of a 5-year or 10-year return, its viability as a good investment has been patchy throughout the past two decades, with some years faring better than others.

Winners or duds?

How good an investment each of these stocks actually was or is, is a far more complex question than what I'm presenting here. But, for what it's worth, what you have here are 12 stocks which, if you happened to buy and sell any of them at exactly the right time in recent history, would have yielded more bang for your buck than any other stocks over the same period. Given the benefit of hindsight (which is always such a wonderful thing, isn't it?), I thought it would be a fun little exercise to identify the stocks that were winners, based on this most dead-simple of all measures.
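
In case you're curious what that dead-simple measure looks like in practice, here's a minimal Python sketch of the idea – assuming you already have one adjusted close price per year for a stock (e.g. pulled from the Alpha Vantage API). The function and the data below are purely illustrative (they're not the actual What If Stocks code), and all the prices between the endpoints are made up.

def best_n_year_growth(prices, n_years=10):
    """Find the n-year window with the highest percentage growth.

    prices is a list of (year, adjusted_close) tuples, one per year,
    sorted by year (e.g. the adjusted close on the first trading day
    of each year).
    """
    best = None

    for i in range(len(prices) - n_years):
        start_year, start_price = prices[i]
        end_year, end_price = prices[i + n_years]
        growth_pct = (end_price - start_price) / start_price * 100

        if best is None or growth_pct > best[2]:
            best = (start_year, end_year, growth_pct)

    return best


# Endpoints roughly match CTI Logistics' 2004-2014 run; the prices in
# between are made-up placeholder values
prices = [
    (2004, 0.0213), (2005, 0.10), (2006, 0.80), (2007, 1.10),
    (2008, 0.90), (2009, 0.85), (2010, 1.00), (2011, 1.20),
    (2012, 1.30), (2013, 1.40), (2014, 1.46)]

print(best_n_year_growth(prices, n_years=10))
# (2004, 2014, 6754.46...) – i.e. roughly the +6,800% quoted above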

The most interesting conclusion that I'd like to draw from this list, is what a surprisingly diverse range of industries it encompasses. There is, of course, an over-representation from mining and commodities (the "wild west" of boom-and-bust companies), with six of the stocks (half of the list) more-or-less being from that sector (although only three are actual mining companies – the others are in: chemical processing; mining infrastructure; and mining transport). However, the other six stocks are quite a mixed bag: finance; gambling; fast food; tech retail; health tech; and telco.

What can we learn from this list, about the types of companies that experience massive stock price growth? Well, to rule out one factor: they can be in any industry. Price surges can be attributed to a range of factors, but I'd say that, for the companies in this list, the most common factor has been the securing of new contracts and of new sales pipelines. For some, it has been all about the value of a particular item of goods or services soaring on the global market at a fortuitous moment. And for others, it has simply been a matter of solid management and consistently good service driving the value of the company up and up over a sustained period.

Some of these companies are considered to be actual winners, i.e. they're companies that the experts have identified, on various occasions, as good investments, for more reasons than just the market price growth that I've pointed out here. Other companies in this list are effectively duds, i.e. experts have generally cast doom and gloom upon them, or have barely bothered to analyse them at all.

I hope you enjoyed this run-down of Aussie stocks that, according to my number-crunching, could have been your cash cows, if only you had been armed with this crystal ball back in the day. In future, I'm hoping to improve What If Stocks to provide more insights, and I'm also hoping to analyse stocks in various markets other than the ASX.

Acknowledgement: all price data used in this analysis has been sourced from the Alpha Vantage API. All analysis is based on adjusted close prices, i.e. historical prices adjusted to reflect all corporate actions (stock splits, mergers, and so forth) that have occurred between the historical date and the current date.

Disclaimer: this article does not contain or constitute investment advice in any way. The author of this article has neither any qualifications nor any experience in finance or investment. The author has no position in any of the stocks mentioned, nor does the author endorse any of the stocks mentioned.

]]>
DNA: the most chaotic, most illegible, most mature, most brilliant codebase ever https://greenash.net.au/thoughts/2018/04/dna-the-most-chaotic-most-illegible-most-mature-most-brilliant-codebase-ever/ Sat, 21 Apr 2018 00:00:00 +0000 https://greenash.net.au/thoughts/2018/04/dna-the-most-chaotic-most-illegible-most-mature-most-brilliant-codebase-ever/ As a computer programmer – i.e. as someone whose day job is to write relatively dumb, straight-forward code, that controls relatively dumb, straight-forward machines – DNA is a fascinating thing. Other coders agree. It has been called the code of life, and rightly so: the DNA that makes up a given organism's genome, is the set of instructions responsible for virtually everything about how that organism grows, survives, behaves, reproduces, and ultimately dies in this universe.

Most intriguing and most tantalising of all, is the fact that we humans still have virtually no idea how to interpret DNA in any meaningful way. It's only since 1953 that we've understood what DNA even is; and it's only since 2001 that we've been able to extract and to gaze upon instances of the complete human genome.

Watson and Crick showing off their DNA model in 1953.
Watson and Crick showing off their DNA model in 1953.
Image source: A complete PPT on DNA (Slideshare).

As others have pointed out, the reason why we haven't had much luck in reading DNA, is because (in computer science parlance) it's not high-level source code, it's machine code (or, to be more precise, it's bytecode). So, DNA, which is sequences of base-4 digits, grouped into (most commonly) 3-digit "words" (known as "codons"), is no more easily decipherable than binary, which is sequences of base-2 digits, grouped into (for example) 8-digit "words" (known as "bytes"). And as anyone who has ever read or written binary (in binary, octal, or hex form, however you want to skin that cat) can attest, it's hard!
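
To make that analogy concrete, here's a trivial Python sketch (my own, purely for illustration) that chops a binary string into 8-digit "bytes", and a DNA string into 3-letter "codons", using exactly the same grouping logic:

def group(sequence, word_size):
    """Split a string of digits into fixed-size "words"."""
    return [sequence[i:i + word_size]
            for i in range(0, len(sequence), word_size)]


# Base-2 digits grouped into 8-digit bytes
print(group('0100100001101001', 8))
# ['01001000', '01101001']

# Base-4 digits (written as A, C, G, T) grouped into 3-letter codons
print(group('ATGGCGTTTTAA', 3))
# ['ATG', 'GCG', 'TTT', 'TAA']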

In this musing, I'm going to compare genetic code and computer code. I am in no way qualified to write about this topic (particularly about the biology side), but it's fun, and I'm reckless, and this is my blog so for better or for worse nobody can stop me.

Authorship and motive

The first key difference that I'd like to point out between the two, is regarding who wrote each one, and why. For computer code, this is quite straightforward: a given computer program was written by one of your contemporary human peers (hopefully one who is still alive, as you can then ask him or her about anything that's hard to grok in the code), for some specific and obvious purpose – for example, to add two numbers together, or to move a chomping yellow pac-man around inside a maze, or to add somersaulting cats to an image.

For DNA, we don't know who, if anyone, wrote the first ever snippet of code – maybe it was G-d, maybe it was aliens from the Delta Quadrant, or maybe it was the random result of various chemicals bashing into each other within the primordial soup. And as for who wrote (and who continues to this day to write) all DNA after that, that too may well be The Almighty or The Borg, but the current theory of choice is that a given snippet of DNA basically keeps on re-writing itself, and that this auto-re-writing happens (as far as we can tell) in a pseudo-random fashion.

This guy didn't write binary or DNA, I'm pretty sure.
This guy didn't write binary or DNA, I'm pretty sure.
Image source: Art UK.

Nor do we know why DNA came about in the first place. From a philosophical / logical point of view, not having an answer to the "who" question, kind of makes it impossible to address the "why", by definition. If it came into existence randomly, then it would logically follow that it wasn't created for any specific purpose, either. And as for why DNA re-writes itself in the way that it does: it would seem that DNA's, and therefore life's, main purpose, as far as the DNA itself is concerned, is simply to continue existing / surviving, as evidenced by the fact that DNA's self-modification results, on average, over the long-term, in it becoming ever more optimally adapted to its surrounding environment.

Management processes

For building and maintaining computer software, regardless of "methodology" (e.g. waterfall, scrum, extreme programming), the vast majority of the time there are a number of common non-dev processes in place. Apart from every geek's favourite bit, a.k.a. "coding", there is (to name a few): requirements gathering; spec writing; code review; testing / QA; version control; release management; staged deployment; and documentation. The whole point of these processes, is to ensure: that a given snippet of code achieves a clear business or technical outcome; that it works as intended (both in isolation, and when integrated into the larger system); that the change it introduces is clearly tracked and is well-communicated; and that the codebase stays maintainable.

For DNA, there is little or no parallel to most of the above processes. As far as we know, when DNA code is modified, there are no requirements defined, there is no spec, there is no review of the change, there is no staging environment, and there is no documentation. DNA seems to follow my former boss's preferred methodology: JFDI. New code is written, nobody knows what it's for, nobody knows how to use it. Oh well. Straight into production it goes.

However, there is one process that DNA demonstrates in abundance: QA. Through a variety of mechanisms, the most important of which is repair enzymes, a given piece of DNA code is constantly checked for integrity errors, and these errors are generally repaired. Mutations (i.e. code changes) can occur during replication due to imperfect copying, or at any other time due to environmental factors. Depending on the genome (i.e. the species) in question, and depending on the gene in question, the level of enforcement of DNA integrity can vary, from "very relaxed" to "very strict". For example, bacteria experience far more mutation between generations than humans do. This is because some genomes consider themselves to still be in "beta", and are quite open to potentially dangerous experimentation, while other genomes consider themselves "mature", and so prefer less change and greater stability. Thus a balance is achieved between preservation of genes, and evolution.

The coding process

For computer software, the actual process of coding is relatively structured and rational. The programmer refers to the spec – which could be anything from a one-sentence verbal instruction bandied over the water-cooler, to a 50-page PDF (preferably it's something in between those two extremes) – before putting hands to keyboard, and also regularly while coding.

The programmer visualises the rough overall code change involved (or the rough overall components of a new codebase), and starts writing. He or she will generally switch between top-down (focusing on the framework and on "glue code") and bottom-up (focusing on individual functions) several times. The code will generally be refined, in response to feedback during code review, to fixing defects in the change, and to the programmer's constant critiquing of his or her own work. Finally, the code will be "done" – although inevitably it will need to be modified in future, in response to new requirements, at which point it's time to rinse and repeat all of the above.

For DNA, on the other hand, the process of coding appears (unless we're missing something?) to be akin to letting a dog randomly roll around on the keyboard while the editor window is open, then cleaning up the worst of the damage, then seeing if anything interesting was produced. Not the most scientific of methods, you might say? But hey, that's science! And it would seem that, amazingly, if you do that on a massively distributed enough scale, over a long enough period of time, you get intelligent life.

DNA modification in progress.
DNA modification in progress.
Image source: DogsToday.

When you think about it, that approach isn't really dissimilar to the current state-of-the-art in machine learning. Getting anything approaching significant or accurate results with machine learning models, has only been possible quite recently, thanks to the availability of massive data sets, and of massive hardware platforms – and even when you let an ML algorithm loose in that environment for a decent period of time, it produces results that contain a lot of noise. So maybe we are indeed onto something with our current approach to ML, although I don't think we're quite onto the generation of truly intelligent software just yet.

Grokking it

Most computer code that has been written by humans for the past 40 years or so, has been high-level source code (i.e. "C and up"). It's written primarily to express business logic, rather than to tell the Von Neumann machine (a.k.a. the computer hardware) exactly what to do. It's up to the compiler / interpreter, to translate that "call function abc" / "divide variable pqr by 50" / "write the string I feel like a Tooheys to file xyz" code, into "load value of register 123" / "put that value in register 456" / "send value to bus 789" code, which in turn actually gets represented in memory as 0s and 1s.

This is great for us humans, because – assuming we can get our hands on the high-level source code – we can quite easily grok the purpose of a given piece of code, without having to examine the gory details of what the computer physically does, step-by-tiny-tedious-step, in order to achieve that noble purpose.

DNA, as I said earlier, is not high-level source code, it's machine code / bytecode (more likely the latter, in which case the actual machine code of living organisms is the proteins, and other things, that DNA / RNA gets "compiled" to). And it now seems pretty clear that there is no higher source code – DNA, which consists of long sequences of Gs, As, Cs, and Ts, is the actual source. The code did not start in a form where a given gene is expressed logically / procedurally – a form from which it could be translated down to base pairs. Both the start state and the end state of the code are base pairs.

A code that was cracked - can the same be done for DNA?
A code that was cracked - can the same be done for DNA?
Image source: The University Network.

It also seems that DNA is harder to understand than machine / assembly code for a computer, because an organic cell is a much more complex piece of hardware than a Von Neumann-based computer (which itself is a specific type of Turing machine). That's why humans were perfectly capable of programming computers using only machine / assembly code to begin with, and why some specialised programmers continue primarily coding at that level to this day. For a computer, the machine itself only consists of a few simple components, and the instruction set is relatively small and unambiguous. For an organic cell, the physical machinery is far more complex (and whether a DNA-containing cell is a Turing machine is itself currently an open research question), and the instruction set is riddled with ambiguous, context-specific meanings.

Since all we have is the DNA bytecode, all current efforts to "decode DNA" focus on comparing long strings of raw base pairs with each other, across different genes / chromosomes / genomes. This is akin to trying to understand what software does by lining up long strings of compiled hex digits for different binaries side-by-side, and spotting sequences that are kind-of similar. So, no offense intended, but the current state-of-the-art in "DNA decoding" strikes me as incredibly primitive, cumbersome, and futile. It's a miracle that we've made any progress at all with this approach, and it's only thanks to some highly intelligent people employing their best mathematical pattern analysis techniques, that we have indeed gotten anywhere.
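
As a crude illustration of what that kind of raw-sequence comparison boils down to (this is just a toy of my own – real genomics tools are vastly more sophisticated), consider counting how many fixed-length substrings, or "k-mers", two sequences have in common:

def shared_kmers(seq_a, seq_b, k=4):
    """Find the distinct k-length substrings common to both sequences."""
    kmers_a = {seq_a[i:i + k] for i in range(len(seq_a) - k + 1)}
    kmers_b = {seq_b[i:i + k] for i in range(len(seq_b) - k + 1)}
    return kmers_a & kmers_b


# Works equally blindly on DNA strings or on hex dumps of a binary
print(shared_kmers('ATGGCGTTTTAAGGC', 'CCATGGCGAATTTT'))
# {'ATGG', 'TGGC', 'GGCG', 'TTTT'} (set ordering may vary)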

Where to from here?

Personally, I feel that we're only really going to "crack" the DNA puzzle, if we're able to reverse-engineer raw DNA sequences into some sort of higher-level code. And, considering that reverse-engineering raw binary into a higher-level programming language (such as C) is a very difficult endeavour, and that doing the same for DNA is bound to be even harder, I think we have our work cut out for us.

My interest in the DNA puzzle was first piqued, when I heard a talk at PyCon AU 2016: Big data biology for pythonistas: getting in on the genomics revolution, presented by Darya Vanichkina. In this presentation, DNA was presented as a riddle that more programmers can and should try to help solve. Since then, I've thought about the riddle now and then, and I have occasionally read some of the plethora of available online material about DNA and genome sequencing.

DNA is an amazing thing: for approximately 4 billion years, it has been spreading itself across our planet, modifying itself in bizarre and colourful ways, and ultimately evolving (according to the laws of natural selection) to become the codebase that defines the behaviour of primitive lifeforms such as humans (and even intelligent lifeforms such as dolphins!).

Dolphins! (Brainier than you are).
Dolphins! (Brainier than you are).
Image source: Days of the Year.

So, let's be realistic here: it took DNA that long to reach its current form; we'll be doing well if we can start to understand it properly within the next 1,000 years, if we can manage it at all before the humble blip on Earth's timeline that is human civilisation fades altogether.

]]>
Mobile phone IMEI whitelisting in Chile and elsewhere https://greenash.net.au/thoughts/2017/12/mobile-phone-imei-whitelisting-in-chile-and-elsewhere/ Mon, 11 Dec 2017 00:00:00 +0000 https://greenash.net.au/thoughts/2017/12/mobile-phone-imei-whitelisting-in-chile-and-elsewhere/ Shortly after arriving in Chile recently, I was dismayed to discover that – due to a new law – my Aussie mobile phone would not work with a local prepaid Chilean SIM card, without me first completing a tedious bureaucratic exercise. So, whereas getting connected with a local mobile number was previously something that took about 10 minutes to achieve in Chile, it's now an endeavour that took me about 2 weeks to achieve.

It turns out that Chile has joined a small group of countries around the world, that have decided to implement a national IMEI (International Mobile Equipment Identity) whitelist. From some quick investigation, as far as I can tell, the only other countries that boast such a system are Turkey, Azerbaijan, Colombia, and Nepal. Hardly the most venerable group of nations to be joining, in my opinion.

As someone who has been to Chile many times, all I can say is: not happy! Bringing your own mobile device, and purchasing a local SIM card, is the cheapest and the most convenient way to stay connected while travelling, and it's the go-to method for a great many tourists travelling all around the world. It beats international roaming hands-down, and it eliminates the unnecessary cost of purchasing a new local phone all the time. I really hope that the Chilean government reconsiders the need for this law, and I really hope that no more countries join this misguided bandwagon.

Getting your IMEI whitelisted in Chile can be a painful process.
Getting your IMEI whitelisted in Chile can be a painful process.
Image source: imgflip.

IMEI what?

In case you've never heard of an IMEI, and have no idea what it is, basically it's a unique identification code for all mobile phones worldwide. Historically, there have been no restrictions on the use of mobile handsets, in particular countries or worldwide, according to the device's IMEI code. On the contrary, what's much more common than the network blocking a device, is for the device itself to block access to all networks except one, because it's locked to a specific carrier and needs to be unlocked.
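
As an aside, an IMEI isn't a completely arbitrary serial number: as far as I'm aware, its 15th digit is a check digit computed with the Luhn algorithm (the same scheme used for credit card numbers), so an IMEI can at least be sanity-checked offline. Here's a rough Python sketch of my own, using a widely-cited example number rather than any real device's IMEI:

def luhn_check_digit(body):
    """Compute the Luhn check digit for a string of digits."""
    total = 0

    # Walk the digits right-to-left; double every second digit,
    # starting with the rightmost one (which sits next to the
    # check digit in the full number)
    for position, char in enumerate(reversed(body)):
        digit = int(char)
        if position % 2 == 0:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit

    return (10 - total % 10) % 10


def is_valid_imei(imei):
    """Check a 15-digit IMEI against its Luhn check digit."""
    return (
        len(imei) == 15 and imei.isdigit()
        and int(imei[-1]) == luhn_check_digit(imei[:-1]))


print(is_valid_imei('490154203237518'))
# True

print(is_valid_imei('490154203237519'))
# False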

A number of countries have implemented IMEI blacklists, mainly in order to render stolen mobile devices useless – these countries include Australia, New Zealand, the UK, and numerous others. In my opinion, a blacklist is a much better solution than a whitelist in this domain, because it only places an administrative burden on problem devices, while all other devices on the network are presumed to be valid and authorised to connect.

Ostensibly, Chile has passed this law primarily to ensure that all mobiles imported and sold locally by retailers are capable of receiving emergency alerts (i.e. for natural disasters, particularly earthquakes and tsunamis), using a special system called SAE (Sistema de Alerta de Emergencias). In actual fact, the system isn't all that special – it's standard Cell Broadcast (CB) messaging, the vast majority of smartphones worldwide already support it, and registering IMEIs is not in any way required for it to function.

For example, the same technology is used for the Wireless Emergency Alerts (WEA) system in the USA, and for the Earthquake Early Warning (EEW) system in Japan, and neither of those countries have an IMEI whitelist in place. Chile uses channel 919 for its SAE broadcasts, which appears to be a standard emergency channel that's already used by The Netherlands and Israel (again, countries without an IMEI whitelist), and possibly also by other countries.

The new law is supposedly also designed to ensure (quoting the original Spanish text):

…que todos los equipos comercializados en el país informen con certeza al usuario si funcionarán o no en las diferentes localidades a lo largo de Chile, según las tecnologías que se comercialicen en cada zona geográfica.

Which translates to English as:

…that all commercially available devices in the country provide a guarantee to the user of whether they will function or not in all of the various regions throughout Chile, according to the technologies that are deployed within each geographical area.

However, that same article suggests that this is a solution to a problem that doesn't currently exist – i.e. that the vast majority of devices already work nationwide, and that Chile's mobile network technology is already sufficiently standardised nationwide. It also suggests that the real reason why this law was introduced, was simply in order to stifle the smaller, independent mobile retailers and service providers with a costly administrative burden, and thus to entrench the monopoly of Chile's big mobile companies (and there are approximately three big players on the scene).

So, it seems that the Chilean government created this law thinking purely of its domestic implications – and even those reasons are unsound, and appear to be mired in vested interests. And, as far as I can see, no thought at all was given to the gross inconvenience that this law was bound to cause, and that it is currently causing, to tourists. The fact that the Movistar Chile IMEI registration online form (which I used) requires that you enter a RUT (a Chilean national ID number, which most tourists including myself lack), exemplifies this utter obliviousness of the authorities and of the big telco companies, regarding visitors to Chile perhaps wanting to use their mobile devices as they see fit.

In summary

Chile is now one of a handful of countries where, as a tourist, despite bringing from home a perfectly good cellular device (i.e. one that's unlocked and that functions with all the relevant protocols and frequencies), the pig-headed bureaucracy of an IMEI whitelist makes the device unusable. So, my advice to anyone who plans to visit Chile and to purchase a local SIM card for your non-Chilean mobile phone: register your phone's IMEI well in advance of your trip, with one of the Chilean companies licensed to "certify" it (the procedure is at least free, for one device per person per year, and can be done online), and thus avoid the inconvenience of your device not working upon arrival in the country.

]]>
A lightweight per-transaction Python function queue for Flask https://greenash.net.au/thoughts/2017/12/a-lightweight-per-transaction-python-function-queue-for-flask/ Mon, 04 Dec 2017 00:00:00 +0000 https://greenash.net.au/thoughts/2017/12/a-lightweight-per-transaction-python-function-queue-for-flask/ The premise: each time a certain API method is called within a Flask / SQLAlchemy app (a method that primarily involves saving something to the database), send various notifications, e.g. log to the standard logger, and send an email to site admins. However, the way the API works, is that several different methods can be forced to run in a single DB transaction, by specifying that SQLAlchemy only perform a commit when the last method is called. Ideally, no notifications should actually get triggered until the DB transaction has been successfully committed; and when the commit has finished, the notifications should trigger in the order that the API methods were called.

There are various possible solutions that can accomplish this, for example: a celery task queue, an event scheduler, and a synchronised / threaded queue. However, those are all fairly heavy solutions to this problem, because we only need a queue that runs inside one thread, and that lives for the duration of a single DB transaction (and therefore also only for a single request).

To solve this problem, I implemented a very lightweight function queue, where each queue is a deque instance, that lives inside flask.g, and that is therefore available for the duration of a given request context (or app context).

The code

The whole implementation really just consists of this one function:

from collections import deque

from flask import g


def queue_and_delayed_execute(
        queue_key, session_hash, func_to_enqueue,
        func_to_enqueue_ctx=None, is_time_to_execute_funcs=False):
    """Add a function to a queue, then execute the funcs now or later.

    Creates a unique deque() queue for each queue_key / session_hash
    combination, and stores the queue in flask.g. The idea is that
    queue_key is some meaningful identifier for the functions in the
    queue (e.g. 'banana_masher_queue'), and that session_hash is some
    identifier that's guaranteed to be unique, in the case of there
    being multiple queues for the same queue_key at the same time (e.g.
    if there's a one-to-one mapping between a queue and a SQLAlchemy
    transaction, then hash(db.session) is a suitable value to pass in
    for session_hash).

    Since flask.g only stores data for the lifetime of the current
    request (or for the lifetime of the current app context, if not
    running in a request context), this function should only be used for
    a queue of functions that's guaranteed to only be built up and
    executed within a single request (e.g. within a single DB
    transaction).

    Adds func_to_enqueue to the queue (and passes func_to_enqueue_ctx as
    kwargs if it has been provided). If is_time_to_execute_funcs is
    True (e.g. if a DB transaction has just been committed), then takes
    each function out of the queue in FIFO order, and executes the
    function.
    """
    # Initialise the set of queues for queue_key
    if queue_key not in g:
        setattr(g, queue_key, {})

    # Initialise the unique queue for the specified session_hash
    func_queues = getattr(g, queue_key)
    if session_hash not in func_queues:
        func_queues[session_hash] = deque()

    func_queue = func_queues[session_hash]

    # Add the passed-in function and its context values to the queue
    func_queue.append((func_to_enqueue, func_to_enqueue_ctx))

    if is_time_to_execute_funcs:
        # Take each function out of the queue and execute it
        while func_queue:
            func_to_execute, func_to_execute_ctx = (
                func_queue.popleft())
            func_ctx = (
                func_to_execute_ctx
                if func_to_execute_ctx is not None
                else {})
            func_to_execute(**func_ctx)

        # The queue is now empty, so clean up by deleting the queue
        # object from flask.g
        del func_queues[session_hash]

To use the function queue, calling code should look something like this:

from flask import current_app as app
from flask_mail import Message
from sqlalchemy.exc import SQLAlchemyError

from myapp.extensions import db, mail


def do_api_log_msg(log_msg):
    """Log the specified message to the app logger."""
    app.logger.info(log_msg)


def do_api_notify_email(mail_subject, mail_body):
    """Send the specified notification email to site admins."""
    msg = Message(
        mail_subject,
        sender=app.config['MAIL_DEFAULT_SENDER'],
        recipients=app.config['CONTACT_EMAIL_RECIPIENTS'])
    msg.body = mail_body

    mail.send(msg)

    # Added for demonstration purposes, not really needed in production
    app.logger.info('Sent email: {0}'.format(mail_subject))


def finalise_api_op(
        log_msg=None, mail_subject=None, mail_body=None,
        is_db_session_commit=False, is_app_logger=False,
        is_send_notify_email=False):
    """Finalise an API operation by committing and logging."""
    # Get a unique identifier for this DB transaction
    session_hash = hash(db.session)

    if is_db_session_commit:
        try:
            db.session.commit()

            # Added for demonstration purposes, not really needed in
            # production
            app.logger.info('Committed DB transaction')
        except SQLAlchemyError as exc:
            db.session.rollback()
            return {'error': 'error finalising api op'}

    if is_app_logger:
        queue_key = 'api_log_msg_queue'

        func_to_enqueue_ctx = dict(log_msg=log_msg)

        queue_and_delayed_execute(
            queue_key=queue_key, session_hash=session_hash,
            func_to_enqueue=do_api_log_msg,
            func_to_enqueue_ctx=func_to_enqueue_ctx,
            is_time_to_execute_funcs=is_db_session_commit)

    if is_send_notify_email:
        queue_key = 'api_notify_email_queue'

        func_to_enqueue_ctx = dict(
            mail_subject=mail_subject, mail_body=mail_body)

        queue_and_delayed_execute(
            queue_key=queue_key, session_hash=session_hash,
            func_to_enqueue=do_api_notify_email,
            func_to_enqueue_ctx=func_to_enqueue_ctx,
            is_time_to_execute_funcs=is_db_session_commit)

    return {'message': 'api op finalised ok'}

And that code can be called from a bunch of API methods like so:

def update_froggy_colour(
        froggy, colour, is_db_session_commit=False, is_app_logger=False,
        is_send_notify_email=False):
    """Update a froggy's colour."""
    froggy.colour = colour

    db.session.add(froggy)

    log_msg = ((
        'Froggy colour updated: {froggy.id}; new value: '
        '{froggy.colour}').format(froggy=froggy))
    mail_body = (
        'Froggy: {froggy.id}; new colour: {froggy.colour}'.format(
            froggy=froggy))

    result = finalise_api_op(
        log_msg=log_msg, mail_subject='Froggy colour updated',
        mail_body=mail_body, is_db_session_commit=is_db_session_commit,
        is_app_logger=is_app_logger,
        is_send_notify_email=is_send_notify_email)

    return result


def make_froggy_jump(
        froggy, jump_height, is_db_session_commit=False,
        is_app_logger=False, is_send_notify_email=False):
    """Make a froggy jump."""
    froggy.is_jumping = True
    froggy.jump_height = jump_height

    db.session.add(froggy)

    log_msg = ((
        'Made froggy jump: {froggy.id}; jump height: '
        '{froggy.jump_height}').format(froggy=froggy))
    mail_body = (
        'Froggy: {froggy.id}; jump height: {froggy.jump_height}'.format(
            froggy=froggy))

    result = finalise_api_op(
        log_msg=log_msg, mail_subject='Made froggy jump',
        mail_body=mail_body, is_db_session_commit=is_db_session_commit,
        is_app_logger=is_app_logger,
        is_send_notify_email=is_send_notify_email)

    return result

And the API methods can be called like so:

def make_froggy_brightpink_and_highjump(froggy):
    """Make a froggy bright pink and jumping high."""
    results = []

    result1 = update_froggy_colour(
        froggy, "bright_pink", is_app_logger=True)
    results.append(result1)

    result2 = make_froggy_jump(
        froggy, "50 metres", is_db_session_commit=True,
        is_app_logger=True, is_send_notify_email=True)
    results.append(result2)

    return results

If make_froggy_brightpink_and_highjump() is called from within a Flask app context, the app's log should include output that looks something like this:

INFO [2017-12-01 09:00:00] Committed DB transaction
INFO [2017-12-01 09:00:00] Froggy colour updated: 123; new value: bright_pink
INFO [2017-12-01 09:00:00] Made froggy jump: 123; jump height: 50 metres
INFO [2017-12-01 09:00:00] Sent email: Made froggy jump

The log output demonstrates that the desired behaviour has been achieved: first, the DB transaction finishes (i.e. the froggy actually gets set to bright pink, and made to jump high, in one atomic write operation); then, the API actions are logged in the order that they were called (first the colour was updated, then the froggy was made to jump); then, email notifications are sent in order (in this case, we only want an email notification sent for when the froggy jumps high – but if we had also asked for an email notification for when the froggy's colour was changed, that would have been the first email sent).

In summary

That's about all there is to this "task queue" implementation – as I said, it's very lightweight, because it only needs to be simple and short-lived. I'm sharing this solution, mainly to serve as a reminder that you shouldn't just use your standard hammer, because sometimes the hammer is disproportionately big compared to the nail. In this case, the solution doesn't need an asynchronous queue, it doesn't need a scheduled queue, and it doesn't need a threaded queue. (Although moving the email sending off to a celery task is a good idea in production; and moving the logging to celery would be warranted too, if it was logging to a third-party service rather than just to a local file.) It just needs a queue that builds up and that then gets processed, for a single DB transaction.

]]>
How successful was the 20th century communism experiment? https://greenash.net.au/thoughts/2017/11/how-successful-was-the-20th-century-communism-experiment/ Tue, 28 Nov 2017 00:00:00 +0000 https://greenash.net.au/thoughts/2017/11/how-successful-was-the-20th-century-communism-experiment/ During the course of the 20th century, virtually every nation in the world was affected, either directly or indirectly, by the "red tide" of communism. Beginning with the Russian revolution in 1917, and ostensibly ending with the close of the Cold War in 1991 (but actually not having any clear end, because several communist regimes remain on the scene to this day), communism was and is the single biggest political and economic phenomenon of modern times.

Communism – or, to be more precise, Marxism – made sweeping promises of a rosy utopian world society: all people are equal; from each according to his ability, to each according to his need; the end of the bourgeoisie, the rise of the proletariat; and the end of poverty. In reality, the nature of the communist societies that emerged during the 20th century was far from this grandiose vision.

Communism obviously was not successful in terms of the most obvious measure: namely, its own longevity. The world's first and its longest-lived communist regime, the Soviet Union, well and truly collapsed. The world's most populous country, the People's Republic of China, is stronger than ever, but effectively remains communist in name only (as does its southern neighbour, Vietnam).

However, this article does not seek to measure communism's success based on the survival rate of particular governments; nor does it seek to analyse (in any great detail) why particular regimes failed (and there's no shortage of other articles that do analyse just that). More important than whether the regimes themselves prospered or met their demise, is their legacy and their long-term impact on the societies that they presided over. So, how successful was the communism experiment, in actually improving the economic, political, and cultural conditions of the populations that experienced it?

Communism: at least the party leaders had fun!
Communism: at least the party leaders had fun!
Image source: FunnyJunk.

Success:

Failure:

Dudes, don't leave a comrade hanging.
Dudes, don't leave a comrade hanging.
Image source: FunnyJunk.

Closing remarks

Personally, I have always considered myself quite a "leftie": I'm a supporter of socially progressive causes, and in particular, I've always been involved with environmental movements. However, I've never considered myself a socialist or a communist, and I hope that this brief article on communism reflects what I believe are my fairly balanced and objective views on the topic.

Based on my list of pros and cons above, I would quite strongly tend to conclude that, overall, the communism experiment of the 20th century was not successful at improving the economic, political, and cultural conditions of the populations that experienced it.

I'm reluctant to draw comparisons, because I feel that it's a case of apples and oranges, and also because I feel that a pure analysis should judge communist regimes on their merits and faults, and on theirs alone. However, the fact is that, based on the items in my lists above, much more success has been achieved, and much less failure has occurred, in capitalist democracies, than has been the case in communist states (and the pinnacle has really been achieved in the world's socialist democracies). The Nordic Model – and indeed the model of my own home country, Australia – demonstrates that a high quality of life and a high level of equality are attainable without going down the path of Marxist Communism; indeed, arguably those things are attainable only if Marxist Communism is avoided.

I hope you appreciate what I have endeavoured to do in this article: that is, to avoid the question of whether or not communist theory is fundamentally flawed; to avoid a religious rant about the "evils" of communism or of capitalism; and to avoid judging communism based on its means, and to instead concentrate on what ends it achieved. And I humbly hope that I have stuck to that plan laudably. Because if one thing is needed more than anything else in the arena of analyses of communism, it's clear-sightedness, and a focus on the hard facts, rather than religious zeal and ideological ranting.

]]>
Using Python's namedtuple for mock objects in tests https://greenash.net.au/thoughts/2017/08/using-pythons-namedtuple-for-mock-objects-in-tests/ Sun, 13 Aug 2017 00:00:00 +0000 https://greenash.net.au/thoughts/2017/08/using-pythons-namedtuple-for-mock-objects-in-tests/ I have become quite a fan of Python's built-in namedtuple collection lately. As others have already written, despite having been available in Python 2.x and 3.x for a long time now, namedtuple continues to be under-appreciated and under-utilised by many programmers.

# The ol'fashioned tuple way
fruits = [
    ('banana', 'medium', 'yellow'),
    ('watermelon', 'large', 'pink')]

for fruit in fruits:
    print('A {0} is coloured {1} and is {2} sized'.format(
        fruit[0], fruit[2], fruit[1]))

# The nicer namedtuple way
from collections import namedtuple

Fruit = namedtuple('Fruit', 'name size colour')

fruits = [
    Fruit(name='banana', size='medium', colour='yellow'),
    Fruit(name='watermelon', size='large', colour='pink')]

for fruit in fruits:
    print('A {0} is coloured {1} and is {2} sized'.format(
        fruit.name, fruit.colour, fruit.size))

namedtuples can be used in a few obvious situations in Python. I'd like to present a new and less obvious situation, that I haven't seen any examples of elsewhere: using a namedtuple instead of MagicMock or flexmock, for creating fake objects in unit tests.

namedtuple vs the competition

namedtuples have a number of advantages over regular tuples and dicts in Python. First and foremost, a namedtuple is (by definition) more semantic than a tuple, because you can define and access elements by name rather than by index. A namedtuple is also more semantic than a dict, because its structure is strictly defined, so you can be guaranteed of exactly which elements are to be found in a given namedtuple instance. And, similarly, a namedtuple is often more useful than a custom class, because it gives more of a guarantee about its structure than a regular Python class does.

A namedtuple can craft an object similarly to the way that MagicMock or flexmock can. The namedtuple object is more limited, in terms of what attributes it can represent, and in terms of how it can be swapped in to work in a test environment. But it's also simpler, and that makes it easier to define and easier to debug.

Compared with all the alternatives listed here (dict, a custom class, MagicMock, and flexmock – all except tuple), namedtuple has the advantage of being immutable. This is generally not such an important feature, for the purposes of mocking and running tests, but nevertheless, immutability always provides advantages – such as elimination of side-effects via parameters, and more thread-safe code.

Really, for me, the biggest "quick win" that you get from using namedtuple over any of its alternatives, is the lovely built-in string representation that the former provides. Chuck any namedtuple in a debug statement or a logging call, and you'll see everything you need (all the fields and their values) and nothing you don't (other internal attributes), right there on the screen.

# Printing a tuple
f1 = ('banana', 'medium', 'yellow')

# Shows all attributes ordered nicely, but no field names
print(f1)
# ('banana', 'medium', 'yellow')


# Printing a dict
f1 = {'name': 'banana', 'size': 'medium', 'colour': 'yellow'}

# Shows all attributes with field names, but ordering is wrong
print(f1)
# {'colour': 'yellow', 'size': 'medium', 'name': 'banana'}


# Printing a custom class instance
class Fruit(object):
    """It's a fruit, yo"""

f1 = Fruit()
f1.name = 'banana'
f1.size = 'medium'
f1.colour = 'yellow'

# Shows nothing useful by default! (Needs a __repr__() method for that)
print(f1)
# <__main__.Fruit object at 0x7f1d55400e48>

# But, to be fair, can print its attributes as a dict quite easily
print(f1.__dict__)
# {'size': 'medium', 'name': 'banana', 'colour': 'yellow'}


# Printing a MagicMock
from mock import MagicMock

class Fruit(object):
    name = None
    size = None
    colour = None

f1 = MagicMock(spec=Fruit)
f1.name = 'banana'
f1.size = 'medium'
f1.colour = 'yellow'

# Shows nothing useful by default! (and f1.__dict__ is full of a tonne
# of internal cruft, with the fields we care about buried somewhere
# amongst it all)
print(f1)
# <MagicMock spec='Fruit' id='140682346494552'>


# Printing a flexmock
from flexmock import flexmock

f1 = flexmock(name='banana', size='medium', colour='yellow')

# Shows nothing useful by default!
print(f1)
# <flexmock.MockClass object at 0x7f691ecefda0>

# But, to be fair, printing f1.__dict__ shows minimal cruft
print(f1.__dict__)
# {
#     'name': 'banana',
#     '_object': <flexmock.MockClass object at 0x7f691ecefda0>,
#     'colour': 'yellow', 'size': 'medium'}


# Printing a namedtuple
from collections import namedtuple

Fruit = namedtuple('Fruit', 'name size colour')
f1 = Fruit(name='banana', size='medium', colour='yellow')

# Shows exactly what we need: what it is, and what all of its
# attributes' values are. Sweeeet.
print(f1)
# Fruit(name='banana', size='medium', colour='yellow')

As the above examples show, without any special configuration, namedtuple's string representation Just Works™.

namedtuple and fake objects

Let's say you have a simple function that you need to test. The function gets passed in a superhero, which it expects is a SQLAlchemy model instance. It queries all the items of clothing that the superhero uses, and it returns a list of clothing names. The function might look something like this:

# myproject/superhero.py


def get_clothing_names_for_superhero(superhero):
    """List the clothing for the specified superhero"""
    clothing_names = []

    clothing_list = superhero.clothing_items.all()

    for clothing_item in clothing_list:
        clothing_names.append(clothing_item.name)

    return clothing_names

Since this function does all its database querying via the superhero object that's passed in as a parameter, there's no need to mock anything via funky mock.patch magic or similar. You can simply follow Python's preferred pattern of duck typing, and pass in something – anything – that looks like a superhero (and, unless he takes his cape off, nobody need be any the wiser).

You could write a test for that function, using namedtuple-based fake objects, like so:

# myproject/superhero_test.py


from collections import namedtuple

from myproject.superhero import get_clothing_names_for_superhero


FakeSuperhero = namedtuple('FakeSuperhero', 'clothing_items name')
FakeClothingItem = namedtuple('FakeClothingItem', 'name')
FakeModelQuery = namedtuple('FakeModelQuery', 'all first')


def get_fake_superhero_and_clothing():
    """Get a fake superhero and clothing for test purposes"""
    superhero = FakeSuperhero(
        name='Batman',
        clothing_items=FakeModelQuery(
            first=lambda: None,
            all=lambda: [
                FakeClothingItem(name='cape'),
                FakeClothingItem(name='mask'),
                FakeClothingItem(name='boots')]))

    return superhero


def test_get_clothing_for_superhero():
    """Test listing the clothing for a superhero"""
    superhero = get_fake_superhero_and_clothing()

    clothing_names = set(get_clothing_names_for_superhero(superhero))

    # Verify that list of clothing names is as expected
    assert clothing_names == {'cape', 'mask', 'boots'}

The same setup could be achieved using one of the alternatives to namedtuple. In particular, a FakeSuperhero custom class would have done the trick. Using MagicMock or flexmock would have been fine too, although they're really overkill in this situation. In my opinion, for a case like this, using namedtuple is really the simplest and the most painless way to test the logic of the code in question.

In summary

I believe that namedtuple is a great choice for fake test objects, when it fits the bill, and I don't know why it isn't used or recommended for this in general. It's a choice that has some limitations: most notably, you can't have any attribute that starts with an underscore (the "_" character) in a namedtuple. It's also not particularly nice (although it's perfectly valid) to chuck functions into namedtuple fields, especially lambda functions.
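
For example (this is just standard namedtuple behaviour, nothing specific to the mocking use case):

from collections import namedtuple

# Field names starting with an underscore are rejected outright...
try:
    Fake = namedtuple('Fake', ['_secret', 'name'])
except ValueError as exc:
    print(exc)
    # Field names cannot start with an underscore: '_secret'

# ...unless you pass rename=True, in which case the offending field
# gets silently renamed to a positional name (here, '_0')
Fake = namedtuple('Fake', ['_secret', 'name'], rename=True)

print(Fake._fields)
# ('_0', 'name')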

Personally, I have used namedtuples in this way quite a bit recently, however I'm still ambivalent about it being the best approach. If you find yourself starting to craft very complicated FakeFoo namedtuples, then perhaps that's a sign that you're doing it wrong. As with everything, I think that this is an approach that can really be of value, if it's used with a degree of moderation. At the least, I hope you consider adding it to your tool belt.

]]>
The Jobless Games https://greenash.net.au/thoughts/2017/03/the-jobless-games/ Sun, 19 Mar 2017 00:00:00 +0000 https://greenash.net.au/thoughts/2017/03/the-jobless-games/ There is growing concern worldwide about the rise of automation, and about the looming mass unemployment that will logically result from it. In particular, the phenomenon of driverless cars – which will otherwise be one of the coolest and the most beneficial technologies of our time – is virtually guaranteed to relegate to the dustbin of history the "paid human driver", a vocation currently pursued by over 10 million people in the US alone.

Them robots are gonna take our jobs!
Them robots are gonna take our jobs!
Image source: Day of the Robot.

Most discussion of late seems to treat this encroaching joblessness entirely as an economic issue. Families without incomes, spiralling wealth inequality, broken taxation mechanisms. And, consequently, the solutions being proposed are mainly economic ones. For example, a Universal Basic Income to help everyone make ends meet. However, in my opinion, those economic issues are actually relatively easy to address, and as a matter of sheer necessity we will sort them out sooner or later, via a UBI or via whatever else fits the bill.

The more pertinent issue is actually a social and a psychological one. Namely: how will people keep themselves occupied in such a world? How will people nourish their ambitions, feel that they have a purpose in life, and feel that they make a valuable contribution to society? How will we prevent the malaise of despair, depression, and crime from engulfing those who lack gainful enterprise? To borrow the colourful analogy that others have penned: assuming that there's food on the table either way, how do we head towards a Star Trek rather than a Mad Max future?

Keep busy

The truth is, since the Industrial Revolution, an ever-expanding number of people haven't really needed to work anyway. What I mean by that is: if you think about which jobs are actually about providing society with the essentials, such as food, water, shelter, and clothing, you'll quickly realise that fewer people than ever are employed in such jobs. My own occupation, web developer, is certainly not essential to the ongoing survival of society as a whole. Plenty of other occupations, particularly in the services industry, are similarly remote from humanity's basic needs.

So why do these jobs exist? First and foremost, demand. We live in a world of free markets and capitalism. So, if enough people decide that they want web apps, and those people have the money to make it happen, then that's all that's required for "web developer" to become and to remain a viable occupation. Second, opportunity. It needs to be possible to do that thing known as "developing web apps" in the first place. In many cases, the opportunity exists because of new technology; in my case, the Internet. And third, ambition. People need to have a passion for what they do. This means that, ideally, people get to choose an occupation of their own free will, rather than being forced into a certain occupation by their family or by the government. If a person has a natural talent for his or her job, and if a person has a desire to do the job well, then that benefits the profession as a whole, and, in turn, all of society.

Those are the practical mechanisms through which people end up spending much of their waking life at work. However, there's another dimension to all this, too. It is very much in the interest of everyone that makes up "the status quo" – i.e. politicians, the police, the military, heads of big business, and to some extent all other "well to-do citizens" – that most of society is caught up in the cycle of work. That's because keeping people busy at work is the most effective way of maintaining basic law and order, and of enforcing control over the masses. We have seen throughout history that large-scale unemployment leads to crime, to delinquency and, ultimately, to anarchy. Traditionally, unemployment directly results in poverty, which in turn directly results in hunger. But even if the unemployed get their daily bread – even if the crisis doesn't reach let them eat cake proportions – they are still at risk of falling to the underbelly of society, if for no other reason, simply due to boredom.

So, assuming that a significantly higher number of working-age men and women will have significantly fewer job prospects in the immediate future, what are we to do with them? How will they keep themselves occupied?

The Games

I propose that, as an alternative to traditional employment, these people engage in large-scale, long-term, government-sponsored, semi-recreational activities. These must be activities that: (a) provide some financial reward to participants; (b) promote physical health and social well-being; and (c) make a tangible positive contribution to society. As a massive tongue-in-cheek, I call this proposal "The Jobless Games".

My prime candidate for such an activity would be a long-distance walk. The journey could take weeks, months, even years. Participants could number in the hundreds, in the thousands, even in the millions. As part of the walk, participants could do something useful, too; for example, transport non-urgent goods or mail, thus delivering things that are actually needed by others, and thus competing with traditional freight services. Walking has obvious physical benefits, and it's one of the most social things you can do while moving and being active. Such a journey could also be done by bicycle, on horseback, or in a variety of other modes.

How about we all just go for a stroll?
How about we all just go for a stroll?
Image source: The New Paper.

Other recreational programs could cover the more adventurous activities, such as climbing, rafting, and sailing. However, these would be less suitable, because: they're far less inclusive of people of all ages and abilities; they require a specific climate and geography; they're expensive in terms of equipment and expertise; they're harder to tie in with some tangible positive end result; they're impractical in very large groups; and they damage the environment if conducted on too large a scale.

What I'm proposing is not competitive sport. These would not be races. I don't see what having winners and losers in such events would achieve. What I am proposing is that people be paid to participate in these events, out of the pocket of whoever has the money, i.e. governments and big business. The conditions would be simple: keep up with the group, and behave yourself, and you keep getting paid.

I see such activities co-existing alongside whatever traditional employment is still available in future; and despite all the doom and gloom predictions, the truth is that there always has been real work out there, and there always will be. My proposal is that, same as always, traditional employment pays best, and thus traditional employment will continue to be the most attractive option for how to spend one's days. Following that, "The Games" pay enough to get by on, but probably not enough to enjoy all life's luxuries. And, lastly, as is already the case in most first-world countries today, for the unemployed there should exist a social security payment, and it should pay enough to cover life's essentials, but no more than that. We already pay people sit down money; how about a somewhat more generous payment of stand up money?

Along with these recreational activities that I've described, I think it would also be a good idea to pay people for a lot of the work that is currently done by volunteers without financial reward. In a future with fewer jobs, anyone who decides to peel potatoes in a soup kitchen, or to host bingo games in a nursing home, or to take disabled people out for a picnic, should be able to support him- or herself and to live in a dignified manner. However, as with traditional employment, there are also only so many "volunteer" positions that need filling, and even with that sector significantly expanded, there would still be many people left twiddling their thumbs. Which is why I think we need some other solution that will easily and effectively get large numbers of people on their feet. And what better way to get them on their feet, than to say: take a walk!

Large-scale, long-distance walks could also solve some other problems that we face at present. For example, getting a whole lot of people out of our biggest and most crowded cities, and "going on tour" to some of our smallest and most neglected towns, would provide a welcome economic boost to rural areas, considering all the support services that such activities would require; while at the same time, it would ease the crowding in the cities, and it might even alleviate the problem of housing affordability, which is acute in Australia and elsewhere. Long-distance walks in many parts of the world – particularly in Europe – could also provide great opportunities for an interchange of language and culture.

In summary

There you have it, my humble suggestion to help fill the void in people's lives in the future. There are plenty of other things that we could start paying people to do, that are more intellectual and that make a more tangible contribution to society: e.g. create art, be spiritual, and perform in music and drama shows. However, these things are too controversial for the government to support on such a large scale, and their benefit is a matter of opinion. I really think that, if something like this is to have a chance of succeeding, it needs to be dead simple and completely uncontroversial. And what could be simpler than walking?

Whatever solutions we come up with, I really think that we need to start examining the issue of 21st-century job redundancy from this social angle. The economic angle is a valid one too, but it has already been analysed quite thoroughly, and it will sort itself out with a bit of ingenuity. What we need to start asking now is: for those young, fit, ambitious people of the future who lack job prospects, what activity can they do that is simple, social, healthy, inclusive, low-impact, low-cost, and universal? I'd love to hear any further suggestions you may have.

Ten rival national top cities of the world https://greenash.net.au/thoughts/2016/12/ten-rival-national-top-cities-of-the-world/ Fri, 09 Dec 2016 00:00:00 +0000 https://greenash.net.au/thoughts/2016/12/ten-rival-national-top-cities-of-the-world/ Most countries have one city which is clearly top of the pops. In particular, one city (which may not necessarily be the national capital) is usually the largest population centre and the main economic powerhouse of a given country. Humbly presented here is a quick and not-overly-scientific list of ten countries that are an exception to this rule. That is, countries where two cities (or more!) vie neck-and-neck for the coveted top spot.

Note: all population statistics are the latest numbers on relevant country- or city-level Wikipedia pages, as of writing, and all are for the cities' metropolitan area or closest available equivalent. The list is presented in alphabetical order by country.

Australia: Sydney and Melbourne

Sydney vs Melbourne
Sydney vs Melbourne
Image sources: Visit NSW, Tourism Australia.

As all my fellow Aussies can attest, Sydney (pop: 4.9m) and Melbourne (pop: 4.5m) well and truly deserve to be at the top of this list. Arguably, no other two cities in the world are such closely-matched rivals. As well as their similarity in population size and economic prowess, Sydney and Melbourne have also been ruthlessly competing for cultural, political and touristic dominance, for most of Australia's (admittedly short) history.

Both cities have hosted the Summer Olympics (Melbourne in 1956, Sydney in 2000). Sydney narrowly leads in population and economic terms, but Melbourne proudly boasts being "the cultural capital of Australia". The national capital, Canberra, was built roughly halfway between Sydney and Melbourne, precisely because the two cities couldn't agree on which one should be the capital.

China: Shanghai and Beijing

Shanghai vs Beijing
Shanghai vs Beijing
Image sources: Gensler Design, MapQuest.

In the world's most populous country, the port city Shanghai (pop: 24.5m) and the capital Beijing (pop: 21.1m) compete to be Number One. These days, Shanghai is marginally winning on the population and economic fronts, but Beijing undoubtedly takes the lead in the political, cultural and historic spheres.

It should also be noted that China's third-most populous city, Guangzhou (pop: 20.8m), and its (arguably) fourth-most populous city, Shenzhen (pop: 18m), are close runners-up to Shanghai and Beijing in population and economic terms. The neighbouring cities of Guangzhou and Shenzhen, together with other adjacent towns and cities, make up what is now the world's most populous urban area, the Pearl River Delta Megacity. This area has a population of 44m, which jumps to 54m if the adjacent territory of Hong Kong is included.

Ecuador: Guayaquil and Quito

Guayaquil vs Quito
Guayaquil vs Quito
Image sources: Grand Hotel Guayaquil, Lonely Planet.

Ecuador's port city Guayaquil (pop: 5.0m) and its capital Quito (pop: 4.2m) are the only pair of cities from Latin America to feature on the list. Most Latin American countries are well and truly dominated by one big urban area. In Ecuador, Guayaquil is the economic powerhouse, while Quito is the nation's political and cultural heart.

Germany: Berlin and Hamburg

Berlin vs Hamburg
Berlin vs Hamburg
Image sources: Slate, Educational Geography.

The urban areas of the capital Berlin (pop: 6.0m) and the port city Hamburg (pop: 5.1m) are (arguably) the two largest in the Bundesrepublik Deutschland. These cities vie closely for economic muscle, and both are also rich historic and cultural centres of Germany.

However, Germany is truly one of the most balanced countries in the world, in terms of having numerous cities that contend for being the top population and economic centre of the land. There are also Munich (pop: 4.5m) and Stuttgart (pop: 4.0m), the southernmost of the nation's big cities. Plus there are the "urban mega-regions" of the Ruhr (pop: 8.5m) and Frankfurt Rhine-Main (pop: 5.8m), which are too spread-out to be considered single metropolitan areas (and which lack a single metro area as populous as the big cities), but which are key centres nonetheless. Unsurprisingly, the very geographical layout of the nation's cities is yet another testament to German planning and efficiency.

Italy: Rome and Milan

Rome vs Milan
Rome vs Milan
Image sources: Amalfi Coast Destination, I Like Going Out.

In La Bella Italia, Rome (pop: 4.3m) and Milan (pop: 4.2m) are the two most populous cities by a fair stretch. With its formidable fashion and finance industries (among many others), Milan is quite clearly the top economic centre of Italy.

In terms of culture, few other pairs of cities can boast such a grand and glorious rivalry as that of Rome and Milan. Naturally, with its Roman Empire legacy, and as the home of the Vatican (making Rome virtually unique globally in being a city with another country inside it!), Rome wins hands-down on the historical, political and touristic fronts. But in terms of art, cuisine, and media (to name a few), Milan packs a good punch. Then again, almost everywhere in Italy punches above its weight in those areas, including the next-largest urban areas of Naples, Turin, Venice and Florence.

India: Delhi and Mumbai

Delhi vs Mumbai
Delhi vs Mumbai
Image sources: Swaminarayan Akshardham New Delhi, FSSAI Consultants in Mumbai.

In the world's second-most-populous country, the mega-cities of Delhi (pop: 21.8m) and Mumbai (pop: 20.8m) compete for people, business, and chaos. Delhi takes the cake politically, culturally, historically, and (as I can attest from personal experience) chaotically. Mumbai, a much newer city – only really having come into existence since the days of the British Raj – is the winner economically.

The next most populous cities of India – Kolkata, Bangalore, and Chennai – are also massive population centres in their own right, and they're not far behind Delhi and Mumbai in terms of national importance.

South Africa: Johannesburg and Cape Town

Johannesburg vs Cape Town
Johannesburg vs Cape Town
Image sources: Awesome Work and Travel, Cape Town International Airport.

South Africa is the only African nation to make this list. Its two chief cities are the sprawling metropolis of Johannesburg (pop: 4.4m), and the picturesque port city of Cape Town (pop: 3.7m). Johannesburg is not only the economic powerhouse of South Africa, but indeed of all Africa. Cape Town, on the other hand, is the historic centre of the land, and with the sea hugging its shores and the distinctive Table Mountain looming large behind it, it's also a place of great natural beauty.

Spain: Madrid and Barcelona

Madrid vs Barcelona
Madrid vs Barcelona
Image sources: Wall Street Journal, Happy People Barcelona.

El Reino de España is dominated by the two big cities of Madrid (pop: 6.3m) and Barcelona (pop: 5.4m). Few other pairs of cities in the world fight so bitterly for economic and cultural superiority, and on those fronts, in Spain there is no clear winner. Having spent much of its history as the chief city of Catalonia – for centuries a principality with its own laws and institutions – Barcelona has a rich culture of its own. And while Madrid is the political capital of modern Spain, Barcelona is considered the more modern metropolis, and has established itself as the "cosmopolitan capital" of the land.

Madrid and Barcelona are not the only twin cities in this list where different languages are spoken, and where historically the cities were part of different nations or kingdoms. However, they are the only ones where open hostility exists and is a major issue to this day: a large faction within Catalonia (including within Barcelona) is engaged in an ongoing struggle to secede from Spain, and the animosity resulting from this struggle is both real and unfortunate.

United States: New York and Los Angeles

New York vs Los Angeles
New York vs Los Angeles
Image sources: Short Term Rentals NYC, Megalopolis Now.

The two biggest urban areas in the land of Uncle Sam, New York (pop: 23.7m) and Los Angeles (pop: 18.7m), differ in many ways apart from just being on opposite coasts. Both are economic and cultural powerhouses: NYC with its high finance and its music / theatre prowess; LA with Hollywood and show biz. The City That Never Sleeps likes to think of itself as the beating heart of the USA (and indeed the world!), while the City of Angels doesn't take itself too seriously, in true California style.

These are the two biggest, but they are by no means the only big boys in town. The nation's next-biggest urban areas – Chicago, Washington-Baltimore, San Francisco Bay Area, Boston, Dallas, Philadelphia, Houston, Miami, and Atlanta (all with populations between 6m and 10m) – are spread out all across the continental United States, and they're all vibrant cities and key economic hubs.

Vietnam: Ho Chi Minh and Hanoi

Ho Chi Minh vs Hanoi
Ho Chi Minh vs Hanoi
Image sources: Pullman Hotels, Lonely Planet.

Finally, in the long and thin nation of Vietnam, the two river delta cities of Ho Chi Minh (pop: 8.2m) in the south, and Hanoi (pop: 7.6m) in the north, have for a long time been the country's key twin hubs. During the Vietnam War era, these cities were the respective capitals of the Western-backed South and the communist North; but these days, Vietnam is well and truly unified, and north and south fly under the same flag.

Conclusion

That's it, my non-authoritative list of rival top cities in various countries around the world. I originally included more pairs of cities in the list, but I culled it down to only include cities that were very closely matched in population size. Numerous other contenders for this list consist of a City A that's bigger, and a City B that's smaller but is more famous or more historic than its twin. Anyway, I hope you like my selection; feedback is welcome.

Crossed swords image source: openclipart.

Orientalists of the East India Company https://greenash.net.au/thoughts/2016/10/orientalists-of-the-east-india-company/ Tue, 18 Oct 2016 00:00:00 +0000 https://greenash.net.au/thoughts/2016/10/orientalists-of-the-east-india-company/ The infamous East India Company, "the Company that Owned a Nation", is remembered harshly by history. And rightly so. On the whole, it was an exploitative venture, and the British individuals involved with it were ruthless opportunists. The Company's actions directly resulted in the impoverishment, the subjugation, and in several instances the death of countless citizens of the Indian Subcontinent.

Company rule, and the subsequent rule of the British Raj, are also acknowledged as contributing positively to the shaping of Modern India, having introduced the English language, built the railways, and established political and military unity. But these contributions are overshadowed by a legacy of corporate greed and wholesale plunder, which continues to haunt the region to this day.

I recently read Four Heroes of India (1898), by F.M. Holmes, an antique book that paints a rose-coloured picture of Company (and later British Government) rule on the Subcontinent. To the modern reader, the book is so incredibly biased in favour of British colonialism that it would be hilarious, were it not so alarming. Holmes's four heroes were notable military and government figures of 18th and 19th century British India.

Clive, Hastings, Havelock, Lawrence; with a Concluding Note on the Rule of Lord Mayo.
Clive, Hastings, Havelock, Lawrence; with a Concluding Note on the Rule of Lord Mayo.
Image source: eBay.

I'd like to present here four alternative heroes: men (yes, sorry, still all men!) who in my opinion represented the British far more nobly, and who left a far more worthwhile legacy in India. All four of these figures were founders or early members of The Asiatic Society (of Bengal), and all were pioneering academics who contributed to linguistics, science, and literature in the context of South Asian studies.

William Jones

The first of these four personalities was by far the most famous and influential. Sir William Jones was truly a giant of his era. The man was nothing short of a prodigy in the field of philology (which is arguably the pre-modern equivalent of linguistics). During his productive life, Jones is believed to have become proficient in no fewer than 28 languages, making him quite the polyglot:

Eight languages studied critically: English, Latin, French, Italian, Greek, Arabic, Persian, Sanscrit [sic]. Eight studied less perfectly, but all intelligible with a dictionary: Spanish, Portuguese, German, Runick [sic], Hebrew, Bengali, Hindi, Turkish. Twelve studied least perfectly, but all attainable: Tibetian [sic], Pâli [sic], Pahlavi, Deri …, Russian, Syriac, Ethiopic, Coptic, Welsh, Swedish, Dutch, Chinese. Twenty-eight languages.

Source: Memoirs of the Life, Writings and Correspondence, of Sir William Jones, John Shore Baron Teignmouth, 1806, Page 376.

Portrait of Sir William Jones.
Portrait of Sir William Jones.
Image source: Wikimedia Commons.

Jones is most famous in scholarly history for being the person who first proposed the linguistic family of Indo-European languages, and thus for being one of the fathers of comparative linguistics. His work laid the foundations for the theory of a Proto-Indo-European mother tongue, which was researched in-depth by later linguists, and which is widely accepted to this day as a language that existed and that had a sizeable native speaker population (despite it being known only through reconstruction, with no surviving written record).

Jones spent 10 years in India, working in Calcutta as a judge. During this time, he founded The Asiatic Society of Bengal. Jones was the foremost of a loosely-connected group of British gentlemen who called themselves orientalists. (At that time, "oriental studies" referred primarily to India and Persia, rather than to China and her neighbours as it does today.)

Like his peers in the Society, Jones was a prolific translator. He produced the authoritative English translation of numerous important Sanskrit documents, including the Manu Smriti (Laws of Manu), and the Abhijnana Shakuntala. In the field of his "day job" (law), he established the right of Indian citizens to trial by jury under Indian jurisprudence. Plus, in his spare time, he studied Hindu astronomy, botany, and literature.

James Prinsep

The numismatist James Prinsep, who worked at the Benares (Varanasi) and Calcutta mints in India for nearly 20 years, was another of the notable British orientalists of the Company era. Although not quite in Jones's league, he was nevertheless an intelligent man who made valuable contributions to academia. His life was also unfortunately short: he died at the age of 40, after falling ill with an unidentified sickness and failing to recover.

Portrait of James Prinsep.
Portrait of James Prinsep.
Image source: Wikimedia Commons.

Prinsep was the founding editor of the Journal of the Asiatic Society of Bengal. He is best remembered as the pioneer of numismatics (the study of coins) on the Indian Subcontinent: in particular, he studied numerous coins of ancient Bactrian and Kushan origin. Prinsep also worked on deciphering the Kharosthi and Brahmi scripts; and he contributed to the science of meteorology.

Charles Wilkins

The typographer Sir Charles Wilkins arrived in India in 1770, several years before Jones and most of the other orientalists. He is considered the first British person in Company India to have mastered the Sanskrit language. Wilkins is best remembered as having created the world's first Bengali typeface, which became a necessity when he was charged with printing the important text A Grammar of the Bengal Language (the first book written in Bengali to ever be printed), written by fellow orientalist Nathaniel Brassey Halhed, and more-or-less commissioned by Governor Warren Hastings.

It should come as no surprise that this pioneering man was one of the founders of The Asiatic Society of Bengal. Like many of his colleagues, Wilkins left a proud legacy as a translator: he was the first person to translate into English the Bhagavad Gita, the most revered holy text in all of Hindu lore. He was also the first director of the "India Office Library".

H. H. Wilson

The doctor Horace Hayman Wilson was in India slightly later than the other gentlemen listed here, not having arrived in India (as a surgeon) until 1808. Wilson was, for a part of his time in Company India, honoured with the role of Secretary of the Asiatic Society of Bengal.

Wilson was one of the key people to continue Jones's great endeavour of bridging the gap between English and Sanskrit. His key contribution was writing the world's first comprehensive Sanskrit-English dictionary. He also translated the Meghaduta into English. In his capacity as a doctor, he researched and published on the matter of traditional Indian medical practices. He also advocated for the continued use of local languages (rather than of English) for instruction in Indian native schools.

The legacy

There you have it: my humble short-list of four men who represent the better side of the British presence in Company India. These men, and other orientalists like them, were by no means perfect, either. They too participated in the Company's exploitative regime. They too were part of the ruling elite. They were no Mother Teresa (the main thing they had in common with her was geographical location). They did little to help the day-to-day lives of ordinary Indians living in poverty.

Nevertheless, they spent their time in India focused on what I believe were noble endeavours; at least, far nobler than the purely military and economic pursuits of many of their peers. Their official vocations were in administration and business enterprise, but they chose to devote themselves as much as possible to academia. Their contributions to the field of language, in particular – under that title I include philology, literature, and translation – were of long-lasting value not just to European gentlemen, but also to the educational foundations of modern India.

In recent times, the term orientalism has come to be synonymous with imperialism and racism (particularly in the context of the Middle East, not so much for South Asia). And it is argued that the orientalists of British India were primarily concerned with strengthening Company rule by extracting knowledge, rather than with truly embracing or respecting India's cultural richness. I would argue that, for the orientalists presented here at least, this was not the case: of course they were agents of British interests, but they also genuinely came to respect and admire what they studied in India, rather than being contemptuous of it.

The legacy of British orientalism in India was, in my opinion, one of the better legacies of British India in general. It's widely acknowledged that it had a positive long-term educational and intellectual effect on the Subcontinent. It's also a topic about which there seems to be insufficient material available – particularly regarding the biographical details of individual orientalists, apart from Jones – so I hope this article is useful to anyone seeking further sources.

Where is the official centre of Sydney? https://greenash.net.au/thoughts/2016/03/where-is-the-official-centre-of-sydney/ Mon, 28 Mar 2016 00:00:00 +0000 https://greenash.net.au/thoughts/2016/03/where-is-the-official-centre-of-sydney/ There are several different ways of commonly identifying the "official centre point" of a city. However, there's little international consensus as to the definition of such a point, and in many countries and cities the definition is quite vague.

Most reliable and most common, is to declare a Kilometre Zero marker as a city's (and often a region's or even a country's) official centre. Also popular is the use of a central post office for this purpose. Other traditional centre points include a city's cathedral, its main railway station, its main clock tower (which may be atop the post office / cathedral / railway station), its tallest building, its central square, its seat of government, its main park, its most famous tourist landmark, or the historical spot at which the city was founded.

Satellite photo of Sydney CBD, annotated with locations of "official centre" candidates.
Satellite photo of Sydney CBD, annotated with locations of "official centre" candidates.
Image source: Satellite Imaging Corp.

My home town of Sydney, Australia, is one of a number of cities worldwide that boasts most of the above landmarks, but all in different locations, and without any mandated rule as to which of them constitutes the official city centre. So, where exactly in Sydney does X mark the spot?

Martin Place

I'll start with the spot that most people – Sydneysiders and visitors alike – commonly consider to be Sydney's central plaza these days: Martin Place. Despite this high esteem that it enjoys, in typical unplanned Sydney fashion, Martin Place was actually never intended to even be a large plaza, let alone the city's focal point.

Martin Place from the western end, as it looks today.
Martin Place from the western end, as it looks today.
Image source: Wikimedia Commons.

The original "Martin Place" (for much of the 1800s) was a small laneway called Moore St between George and Pitt streets, similar to nearby Angel Place (which remains a laneway to this day). In 1892, just after the completion of the grandiose GPO Building at its doorstep, Martin Place was widened and was given its present name. It wasn't extended to Macquarie St, nor made pedestrian-only, until 1980 (just after the completion of the underground Martin Place Station).

The chief justification for Martin Place being a candidate on this list is that it's the home of Sydney's central post office. The GPO building also has an impressive clock tower sitting atop it. In addition, Martin Place is home to the Reserve Bank of Australia, and the NSW Parliament and the State Library of NSW are very close to its eastern end. It's also geographically smack-bang in the centre of the "business end" of Sydney's modern CBD, and it's culturally and socially acknowledged as the city's centre.

Town Hall

If you ask someone on the street in Sydney where the city's "central spot" is, and if he/she hesitates for a moment, chances are that said person is tossing up between Martin Place and Town Hall. When saying the name "Town Hall", you could be referring to the underground train station (one of Sydney's busiest), to the Town Hall building itself, to Town Hall Square, or (most likely) to all of the above. Scope aside, Town Hall is one of the top candidates for being called the centre of Sydney.

View of the Town Hall building, busy George and Druitt Streets, and St Andrews Cathedral.
View of the Town Hall building, busy George and Druitt Streets, and St Andrews Cathedral.
Image source: FM Magazine.

As with Martin Place, Town Hall was never planned either to resemble its current form, or to be such a central location. Indeed, during the early colonial days, the site in question was on the outskirts of Sydney Town, and was originally a cemetery. The Town Hall building was opened in the 1890s.

In terms of qualifying as the potential centre of Sydney, Town Hall has a lot going for it. As its name suggests, it's home to the building which is the seat of local government for the City of Sydney (the building also has a clock tower). With its sprawling underground train station, with numerous bus stops in and adjacent to it, and with its location at the intersection of major thoroughfares George St and Park / Druitt St, Town Hall is – in practice – Sydney's most important transport hub. It's home to St Andrew's Cathedral, the seat of the Anglican Archdiocese of Sydney. And it's adjacent to the Queen Victoria Building, which – although it has no official role – is considered one of Sydney's most beautiful buildings.

Town Hall is also in an interesting position in terms of urban geography. It's one of the more southerly candidates for "official centre". To its north, where the historic heart of Sydney lies, big businesses and workers in suits dominate. While to its south lies the "other half" of Sydney's CBD: some white-collar business, but more entertainment, restaurants, sleaze, and shopping. It could be said that Town Hall is where these two halves of the city centre meet and mingle.

Macquarie Place

I've now covered the two spots that most people would likely think of as being the centre of Sydney, but which were never historically planned as such, and which "the powers that be" have never clearly proclaimed as such. The next candidate is a spot which was actually planned to be the official city centre (at least, as much as anything has ever been "planned" in Sydney), but which today finds itself at the edge of the CBD, and which few people have even heard of.

The New South Wales "Kilometre Zero" obelisk in Macquarie Place.
The New South Wales "Kilometre Zero" obelisk in Macquarie Place.
Image source: City Art Sydney.

Macquarie Place is a small, leafy, triangular-shaped plaza on the corner of Bridge and Loftus streets, one block from Circular Quay. Commissioned by Governor Lachlan Macquarie (most famous of all NSW governors, in whose honour ten gazillion things in Sydney and NSW are named), and completed in 1810, it is the oldest public space in Australia. It's also the closest (of the places on this list) to the spot where Sydney was founded, in present-day Loftus St.

At the time, the area of Macquarie Place was the geographic centre of Sydney Town. The original colonial settlement clung to what is today Circular Quay, as all trade and transport with the rest of the world was via the shipping in Sydney Cove. The early town also remained huddled close to the Tank Stream, which ran between Pitt and George streets before discharging into the harbour (today the Tank Stream has been entirely relegated to a stormwater drain), and which was Sydney's sole fresh water supply for many years. The "hypotenuse" edge of Macquarie Place originally ran alongside the Tank Stream; indeed, the plaza's triangular shape was due to the Tank Stream fanning out into a muddy delta (all long gone and below the ground today) as it approached the harbour.

James Meehan's 1803 map of Sydney, with the original Tank Stream marked.
James Meehan's 1803 map of Sydney, with the original Tank Stream marked.
Image source: National Library of Australia.

The most striking and significant feature of Macquarie Place is its large stone obelisk, which was erected in 1818, and which remains the official Kilometre Zero marker of Sydney and of NSW to this day. The obelisk lists the distances, in miles, to various towns in the greater Sydney region. As is inscribed in the stonework, its purpose is:

To record that all the
Public Roads
Leading to the Interior
of the Colony
are Measured from it.

So, if it's of such historical importance, why is Macquarie Place almost unheard-of by Sydney locals and visitors alike? Well, first and foremost, the fact is that it's no longer the geographical, cultural, or commercial heart of the city. That ship sailed south some time ago. Apart from its decline in fame, Macquarie Place has also suffered from being literally, physically eroded over the years. The size of the plaza was drastically reduced in the 1840s, when Loftus St was built to link Bridge St to Circular Quay, and the entire eastern half of Macquarie Place was lost. The relatively small space is now also dwarfed by the skyscrapers that loom over it on all sides.

Macquarie Place is today a humble, shady, tranquil park in the CBD's north, frequented by tour groups and by a few nearby office workers. It certainly doesn't feel like the centre of a city of over 4 million people. However, it was declared Sydney's "town square" when it was inaugurated, and no other spot has been declared its successor ever since. So, I'd say that if you ask a Sydney history buff, then he/she would surely have to concede that Macquarie Place remains the official city centre.

Pitt St Mall

With the top three candidates done, I'll now cover the other punters that might contend for centre stage in Sydney. However, I doubt that anyone would seriously consider these other spots to be in the running. I'm just listing them for completeness. First off is Pitt St Mall.

Sydney's central shopping precinct of Pitt St Mall, with Sydney Tower visibly adjacent to it.
Sydney's central shopping precinct of Pitt St Mall, with Sydney Tower visibly adjacent to it.
Image source: Structural & Civil Engineers.

Pitt St is one of the oldest streets in Sydney. However, there was never any plan for it to house a plaza. For much of its history (for almost 100 years), one of Sydney's busiest and longest-serving tram lines ran up its entire length. Since at least the late 1800s, the middle section of Pitt St – particularly the now pedestrian-only area between King and Market streets – has been Sydney's prime retail and fashion precinct. Some time in the late 1980s, this area was closed to traffic, and it's been known as Pitt St Mall ever since.

Pitt St Mall does actually tick several boxes as a contender for "official city centre". First and foremost, it is geographically the centre of Sydney's modern CBD, lying exactly in the middle between Martin Place and Town Hall. It's also home to Sydney Tower, the city's tallest structure. Plus, it's where the city's heaviest concentration of shops and of shopping centres can be found. However, the Mall has no real historical, cultural, or social significance. It exists purely to enhance the retail experience of the area.

Central Station

Despite its name, Sydney's Central Railway Station is not in the middle of the city, but rather on the southern fringe of the CBD. Like its more centrally located cousin Town Hall, the site of Central Station was originally a cemetery (and the station itself was originally just south of its present location). Today, Central is Sydney's busiest passenger station. We Sydneysiders aren't taught in school that it's All Stations To Central for nothing.

Central Station as seen from adjacent Eddy Ave.
Central Station as seen from adjacent Eddy Ave.
Image source: Weekend Notes.

Central Station is the most geographically far-flung of the candidates listed in this article, and due to this, few people (if any) would seriously vote for it as Sydney's official centre. However, it does have some points in its favour. It is the city's main train station. Its Platform 1 is the official Kilometre Zero point of the NSW train network. And its clock tower dictates the official time of NSW trains (and, by extension, the official civil time in NSW).

Hyde Park

Although it's quite close to Town Hall and Pitt St Mall distance-wise, Hyde Park hugs the eastern edge of the Sydney CBD, rather than commanding centre stage. Inaugurated by Big Mac in 1810, together with Macquarie Place, Hyde Park is Sydney's oldest park, as well as its official main park.

Expansive view looking south-west upon Hyde Park.
Expansive view looking south-west upon Hyde Park.
Image source: Floodslicer.

Macquarie's architect, Francis Greenway, envisaged Hyde Park eventually becoming Sydney's town square; however, this never eventuated. Despite being Sydney's oldest park, present-day Hyde Park is also quite unrecognisable from its original form, having been completely re-designed and rebuilt several times. The obelisk at the head of Bathurst St, erected in 1857 (it's actually a sewer vent!), is probably the oldest artifact of the park that remains unchanged.

As well as being central Sydney's main green space, Hyde Park is also home to numerous important adjacent buildings, including St Mary's Cathedral (head of the Sydney Catholic Archdiocese), St James Church (the oldest church building in Sydney), the Supreme Court of NSW, and Hyde Park Barracks. Plus, Hyde Park boasts a colourful history, whose many anecdotes comprise an important part of the story of Sydney.

The Rocks Square

The place I'm referring to here doesn't even have a clearly-defined name. As far as I can tell, it's most commonly known as (The) Rocks Square, but it could also be called Clocktower Square (for the tower and shopping arcade adjacent to it), Argyle St Mall, Argyle St Market, or just "in front of The Argyle" (for the adjacent historic building and present-day nightclub). At any rate, I'm talking about the small, pedestrian-only area at the eastern end of Argyle St, in Sydney's oldest precinct, The Rocks.

Weekend markets at the eastern end of Argyle St, in The Rocks.
Weekend markets at the eastern end of Argyle St, in The Rocks.
Image source: David Ing.

This spot doesn't have a whole lot going for it. As I said, it's not even named properly, and it's not an official square or park of any sort. However, it's generally considered to be the heart of The Rocks, and in Sydney's earliest days it was the rough location of the city's social and economic centre. Immediately to the west of The Rocks Square, you can walk or drive through the Argyle Cut, which was the first major earth-moving project in Sydney's history. Today, The Rocks Square is a busy pedestrian thoroughfare, especially on weekends when the popular Rocks Markets are in full swing. And one thing that hasn't changed a bit since day one: there's no shortage of pubs, and other watering-holes, in and around this spot.

Darling Harbour

I'm only reluctantly including Darling Harbour on this list (albeit lucky last): clearly off to the west of the CBD proper, it's never been considered the "official centre" of Sydney by anyone. For much of its history, Darling Harbour was home to a collection of dirty, seedy dockyards that comprised the city's busiest port. The area was completely overhauled as the showpiece of Sydney's 1988 Australian Bicentenary celebrations. Since then, it's been one of Sydney's most popular tourist spots.

The centre of Darling Harbour, close to the IMAX Theatre.
The centre of Darling Harbour, close to the IMAX Theatre.
Image source: Wikimedia Commons.

Other than being a tourist trap, Darling Harbour's main claim to entitlement on this list is that it hosts the Sydney Convention Centre (the original centre was recently demolished, and is currently being rebuilt on a massive scale). The key pedestrian thoroughfare of Darling Harbour, just next to the IMAX Theatre (i.e. the spot in question for this list), is unfortunately situated directly below the Western Distributor, a large freeway that forms a roof of imposing concrete.

Final thoughts

Hope you enjoyed this little tour of the contenders for "official centre" of Sydney. Let me know if you feel that any other spots are worthy of being in the race. As for the winner: I selected what I believe are the three finalists, but I'm afraid I can't declare a clear-cut winner from among them. Purists would no doubt pick Macquarie Place, but in my opinion Martin Place and Town Hall present competition that can't be ignored.

Sydney at sunset, with the city centre marked in dark red.
Sydney at sunset, with the city centre marked in dark red.
Image source: SkyscraperCity.

Who knows? Perhaps the illustrious Powers That Be – in this case, the NSW Government and/or the Sydney City Council – will, in the near future, clarify the case once and for all. Then again, considering the difficulty of choice (as demonstrated in this article), and considering the modus operandi of the guv'ment around here, it will probably remain in the "too hard" basket for many years to come.

Where there be no roads https://greenash.net.au/thoughts/2016/03/where-there-be-no-roads/ Wed, 09 Mar 2016 00:00:00 +0000 https://greenash.net.au/thoughts/2016/03/where-there-be-no-roads/ And now for something completely different, here's an interesting question. What terra firma places in the world are completely without roads? Where in the world will you find large areas, in which there are absolutely no official vehicle routes?

A road (of sorts) slowly being extended into the vast nothingness of Siberia.
A road (of sorts) slowly being extended into the vast nothingness of Siberia.
Image source: Favourite picture: Road construction in Siberia – RoadStars.

Naturally, such places also happen to be largely bereft of any other human infrastructure, such as buildings; and to be largely bereft of any human population. These are places where, in general, nothing at all is to be encountered save for sand, ice, and rock. However, that's just coincidental. My only criterion, for the purpose of this article, is a lack of roads.

Alaska

I was inspired to write this article after reading James Michener's epic novel Alaska. Before reading that book, I had only a vague knowledge of most things about Alaska, including just how big, how empty, and how inaccessible it is.

Map of Alaska: areas without roads (approximate) are highlighted in red.
Map of Alaska: areas without roads (approximate) are highlighted in red.
Image source: Google Earth (red highlighting by yours truly).

One might think that, on account of it being part of the United States, Alaska boasts a reasonably comprehensive road network. Think again. Unlike the "Lower 48" (as the contiguous USA is referred to up there), only a small part of Alaska has roads of any sort at all, and that's the south-central and eastern interior part of the state, around Anchorage and Fairbanks. And even there, the main routes are really part of the American Interstate network only on paper.

As you can see from the map, the entire western part of the state, and most of the north of the state, lack any road routes whatsoever. The north-east is also almost without roads, except for the Dalton Highway – better known locally as "The Haul Road" – which is a crude and uninhabited route for most of its length.

There has been discussion for decades about the possibility of building a road to Nome, which is the biggest settlement in western Alaska. However, such a road remains a pipe dream. It's also impossible to drive to Barrow, which is the biggest place in northern Alaska, and also the northernmost city in North America. This is despite Barrow being only about 300km west of the Dalton Highway's terminus at the Prudhoe Bay oilfields.

Road building is a trouble-fraught enterprise in Alaska, where distances are vast, population centres are few and far between, and the geography and climate are harsh. In particular, building roads on permafrost (which underlies much of Alaska's terrain) can be challenging, because the ground heaves and subsides as its upper layer freezes and thaws, violently cracking whatever is on top of it. Also, while solid in winter, the ground above the permafrost turns to muddy swamp in summer.

Alaska's Yukon River frozen solid in winter.
Alaska's Yukon River frozen solid in winter.
Image source: Yukon Animator.

It's no wonder, then, that for most of the far-flung outposts in northern and western Alaska, the main forms of transport are by sea or air. Where terrestrial transport is taken, it's most commonly in the form of a dog sled, and remains so to this day. In winter, Alaska's true main highways are its frozen rivers – particularly the mighty Yukon – which have been traversed by sled dogs, on foot (often with fatal results), and even by bicycle.

Canada

Much like its neighbour Alaska, northern Canada is also a vast expanse of frozen tundra that remains largely uninhabited. Considering the enormity of the area in question, Canada has actually made quite impressive progress in the building of roads further north. However, as the map illustrates, much of the north remains pristine and unblemished.

Map of Canada: areas without roads (approximate) are highlighted in red.
Map of Canada: areas without roads (approximate) are highlighted in red.
Image source: Google Earth (red highlighting by yours truly).

The biggest chunk of the roadless Canadian north is the territory of Nunavut. There are no roads to Nunavut (unless you count this), nor are there any linking its far-flung towns. As its Wikipedia page states, Nunavut is "the newest, largest, northernmost, and least populous territory of Canada". Nunavut's principal settlements of Iqaluit, Rankin Inlet, and Cambridge Bay can only be reached by sea or air.

Fun in the sun in Iqaluit, Nunavut.
Fun in the sun in Iqaluit, Nunavut.
Image source: Huffington Post.

The Northwest Territories is barren in many places, too. The entire eastern pocket of the Territories, bordering Nunavut and Saskatchewan (i.e. everything east of Tibbitt Lake, where the Ingraham Trail ends), has not a single road. And there are no roads north of Wrigley (where the Mackenzie Highway ends), except for the northernmost section of the Dempster Highway up to Inuvik. There are also no roads north of the Dempster Highway in Yukon Territory. And, on the other side of Canada, there are no roads in Quebec or Labrador north of Caniapiscau and the Trans-Taiga Road.

Greenland

Continuing east around the Arctic Circle, we come to Greenland, which is the largest contiguous and permanently-inhabited land mass in the world to have no roads at all between its settlements. Greenland is also the world's largest island.

Map of Greenland: areas without roads (approximate) are highlighted in red.
Map of Greenland: areas without roads (approximate) are highlighted in red.
Image source: Google Earth (red highlighting by yours truly).

The reason for the lack of road connections is the Greenland ice sheet, the second-largest body of ice in the world (after the Antarctic ice sheet), which covers 81% of the territory's surface. (And so the answer to the age-old question "Is Greenland really green?" is, on the whole, "No!") The only way to travel between Greenland's towns year-round is by air, with sea travel being possible only in summer, and dog sledding only in winter.

Svalbard

I'm generally avoiding covering small islands in this article, and am instead focusing on large continental areas. However, Svalbard (whose main island is actually Spitsbergen) is the largest island in the world – apart from islands that fall within other areas covered in this article – with no roads between any of its settlements.

Map of Svalbard: areas without roads (approximate) are highlighted in red.
Map of Svalbard: areas without roads (approximate) are highlighted in red.
Image source: Google Earth (red highlighting by yours truly).

Svalbard is a Norwegian territory situated well north of the Arctic Circle. Its capital, Longyearbyen, is the northernmost city in the world. There are no roads linking Svalbard's handful of towns. Travel options involve air, sea, or snow.

Siberia

Siberia is the largest geographical region of the world's largest country (Russia), and is well known as a vast "frozen wasteland", so it should come as no surprise that it features in this article. Siberia is also the last of the arctic areas that I'll be covering here (indeed, if you continue east, you'll get back to Alaska, where I started). Consisting primarily of vast tracts of taiga and tundra, Siberia – particularly further north and further east – is a sparsely inhabited land of extreme cold and remoteness.

Map of Siberia: areas without roads (approximate) are highlighted in red.
Map of Siberia: areas without roads (approximate) are highlighted in red.
Image source: Google Earth (red highlighting by yours truly).

Considering the size, the emptiness, and the often challenging terrain, Russia has actually made quite impressive achievements in building transport routes through Siberia. Starting with the late days of the Russian Empire, going strong throughout much of the Soviet Era, and continuing with the present-day Russian Federation, civilisation has slowly but surely made inroads (no pun intended) into the region.

North-west Russia – which actually isn't part of Siberia, being west of the Ural Mountains – is quite well-serviced by roads these days. There are two modern roads going north all the way to the Barents Sea: the R21 Highway to Murmansk, and the M8 route to Arkhangelsk. Further east, closer to the start of Siberia proper, there's a road up to Vorkuta, but it's apparently quite crude.

Crossing the Urals east into Siberia proper, Yamalo-Nenets has until fairly recently been quite lacking in land routes, particularly north of the capital Salekhard. However, that has changed dramatically of late in the remote (and sparsely inhabited) Yamal Peninsula, where there is still no proper road, but where the brand-new Obskaya-Bovanenkovo Railroad is operating. Purpose-built for the exploitation of what is believed to be the world's largest natural gas field, this is now the northernmost railway line in the world (and it's due to be extended even further north). Further up from Yamalo-Nenets, the islands of Novaya Zemlya are without land routes.

How do you get around Siberia if there are no roads? Try a reindeer sled!
How do you get around Siberia if there are no roads? Try a reindeer sled!
Image source: The Washington Post.

East of the Yamal Peninsula, the great roadless expanse of northern Siberia begins. In Krasnoyarsk Krai, the second-biggest geographical division in Russia, there are no real roads more than a few hundred kilometres north of the city of Krasnoyarsk. Nothing, that is, except for the road and railway (until recently the world's northernmost) that reach the northern outpost of Norilsk; although neither road nor rail is properly connected to the rest of Russia.

The Sakha Republic, Russia's biggest geographical division, is completely roadless save for the main highway passing through its south-east corner, and its capital, Yakutsk. In the far north of Sakha, on the shores of the Arctic Ocean, the town of Tiksi is reckoned to be the most remote settlement in all of Russia. Here in the depths of Siberia, the main transport is via the region's mighty rivers, of which the Lena forms the backbone of Sakha. In winter, dog sleds and ice vehicles are the norm.

In the extreme north-east of Siberia, where Chukotka and the Kamchatka Peninsula can be found, there are no road routes whatsoever. Transport in these areas is solely by sea, air, or ice. The only road that comes close to these areas is the Kolyma Highway, also infamously known as the Road of Bones; this route has been improved in recent years, although it's still hair-raising for much of its length, and it's still one of the most remote highways in the world. There are also no roads to Okhotsk (which was the first and the only Russian settlement on the Pacific coast for many centuries), nor to anywhere else in northern Khabarovsk Krai.

Tibet

A part of the People's Republic of China (whether they like it or not) since 1951, Tibet has come a long way since the old days, when the entire kingdom did not have a single road. Today, China claims that 70% of all villages in Tibet are connected by road. And things have stepped up quite a notch since the 2006 opening of the Trans-Tibetan (Qinghai-Tibet) Railway, which is a marvel of modern engineering, and which is (at over 5,000m in one section) the highest-altitude railway in the world.

Map of Tibet: areas without roads (approximate) are highlighted in red.
Map of Tibet: areas without roads (approximate) are highlighted in red.
Image source: Google Earth (red highlighting by yours truly).

Virtually the entire southern half of Tibet boasts a road network today. However, the central north area of the region – as well as a fair bit of adjacent terrain in neighbouring Xinjiang and Qinghai provinces – still appears to be without roads. This area also looks like it's devoid of any significant settlements, with nothing much around except high-altitude tundra.

Sahara

Leaving the (mostly icy) roadless realms of North America and Eurasia behind us, it's time to turn our attention southward, where the regions in question are more of a mixed bag. First up: the Sahara, the world's largest hot desert, which covers almost all of northern Africa, and which is virtually uninhabited save for its semi-arid fringes.

Map of the Sahara: areas without roads (approximate) are highlighted in red.
Map of the Sahara: areas without roads (approximate) are highlighted in red.
Image source: Google Earth (red highlighting by yours truly).

As the map shows, most of the parched interior of the Sahara is without roads. This includes: north-eastern Mauritania, northern Mali, south-eastern and south-western Algeria, northern Niger, southern Libya, northern Chad, north-west Sudan, and south-west Egypt. For all of the above, the only access is by air, by well-equipped 4WD convoy, or by camel caravan.

The only proper road cutting through this whole area is the optimistically named Trans-Sahara Highway, the key part of which is the crossing from Agadez, Niger, north to Tamanrasset, Algeria. However, although most of the overall route (going all the way from Nigeria to Algeria) is paved, the section north of Agadez is still largely just a rough track through the sand, with occasional signposts indicating the way. There is also a rough track from Mali to Algeria (heading north from Kidal), but it appears to not be a proper road, even by Saharan standards.

The classic way to cross the Sahara: with the help of the planet's best desert survivors.
The classic way to cross the Sahara: with the help of the planet's best desert survivors.
Image source: Found the World.

I should also state the obvious here: the Sahara and the Sahel are not only among the most arid and sparsely populated places on Earth, they are also among the poorest, least developed, and most politically unstable. As such, it should come as no surprise that overland travel through most of the roadless area is currently strongly discouraged, due to the security situation in many of the listed countries.

Australia

No run-down of the world's great roadless areas would be complete without including my sunburnt homeland, Australia. As I've blogged about before, there's a whole lot of nothing in the middle of this joint.

Map of Australia: areas without roads (approximate) are highlighted in red.
Map of Australia: areas without roads (approximate) are highlighted in red.
Image source: Google Earth (red highlighting by yours truly).

The biggest area of Australia to still lack road routes is the heart of the Outback, in particular most of the east of Western Australia, and neighbouring land in the Northern Territory and South Australia. This area is bisected by only a single half-decent route (running east-west), the Outback Way – a route that is almost entirely unsealed – resulting in a north and a south chunk of roadless expanse.

The north chunk is centred on the Gibson Desert, and also includes large parts of the Great Sandy Desert and the Tanami Desert. The Gibson Desert, in particular, is considered to be the most remote place in Australia, and this is evidenced by its being where the last uncontacted Aboriginal tribe was discovered – those fellas didn't come outta the bush there 'til 1984. The south chunk consists of the Great Victoria Desert, which is the largest single desert in Australia, and which is similarly remote.

After that comes the area of Lake Eyre – Australia's biggest lake and, in typical Aussie style, one that seldom has any water in it – and the Simpson Desert to its north. The closest road to Lake Eyre itself is the Oodnadatta Track, and the only road that skirts the edge of the Simpson Desert is the Plenty Highway (which is actually part of the Outback Way mentioned above).

Close to Lake Eyre: not much around for miles.
Close to Lake Eyre: not much around for miles.
Image source: Avalook.

On the Apple Isle of Tasmania, the entire south-west region is an uninhabited, pristine, climatically extreme wilderness, and it's devoid of any roads at all. The only access is by sea or air: even 4WD is not an option in this hilly and forested area. Tasmania's famous South Coast Track bushwalk begins at the outpost of Melaleuca, where there is nothing except a small airstrip, and which can effectively only be reached by light aircraft. Not a trip for the faint-hearted.

Finally, in the extreme north-east of Australia, Cape York Peninsula remains one of the least accessible places on the continent, and has almost no roads (particularly the closer you get to the tip). The Peninsula Development Road, going as far north as Weipa, is the only proper road in the area: it's still largely unsealed, and like all other roads in the area, is closed and/or impassable for much of the year due to flooding. Up from there, Bamaga Road and the road to the tip are little more than rough tracks, and are only navigable by experienced 4WD'ers for a few months of the year. In this neck of the woods, you'll find that crocodiles, mosquitoes, jellyfish, and mud are much more common than roads.

New Zealand

Heading east across the ditch, we come to the Land of the Long White Cloud. The South Island of New Zealand is well-known for its dazzling natural scenery: crystal-clear rivers, snow-capped mountains, jutting fjords, mammoth glaciers, and rolling hills. However, all that doesn't make for areas that are particularly easy to live in, or to construct roads through.

Map of New Zealand (South Island): areas without roads (approximate) are highlighted in red.
Map of New Zealand (South Island): areas without roads (approximate) are highlighted in red.
Image source: Google Earth (red highlighting by yours truly).

Essentially, the entire south-west edge of NZ's South Island is without road access. In particular, all of Fiordland National Park: the whole area south of Milford Sound, and east of Te Anau. Same deal for Mount Aspiring National Park, between Milford Sound and Jackson Bay. The only exception is Milford Sound itself, which can be accessed via the famous Homer Tunnel, an engineering feat that pierces the walls of Fiordland against all odds.

Chile

You'd think that, being such a long and thin country, getting at least one road to traverse the entire length of Chile wouldn't be so hard. Think again. Chile is the world's longest north-south country, and spans a wide range of climatic zones, from hot dry desert in the north, to glacial fjord-land in the extreme south. If you've seen Chile all the way from Arica down to Punta Arenas (as I have!), then you've witnessed first-hand the geographical variety that it has to offer.

Map of Chile (far south): areas without roads (approximate) are highlighted in red.
Map of Chile (far south): areas without roads (approximate) are highlighted in red.
Image source: Google Earth (red highlighting by yours truly).

Roads can be found in all of Chile, except for one area: in the far south, between Villa O'Higgins and Torres del Paine. That is, the southern-most portion of Región de Aysén, and the northern half of Región de Magallanes, are entirely devoid of roads. This is mainly on account of the Southern Patagonian Ice Field, one of the world's largest chunks of ice outside of the polar regions. This ice field was only traversed on foot, for the first time in history, as recently as 1998; it truly is one of our planet's final unconquered frontiers. No road will be crossing it anytime soon.

Majestic Glaciar O'Higgins, at the northern end of the Southern Patagonian Ice Field.
Majestic Glaciar O'Higgins, at the northern end of the Southern Patagonian Ice Field.
Image source: Mallin Colorado.

Much of the Chilean side of the island of Tierra del Fuego is also without roads: this includes most of Parque Karukinka, and all of Parque Nacional Alberto de Agostini.

The Chilean government has for many decades maintained the monumental effort of extending the Carretera Austral ever further south. The route reached its current terminus at Villa O'Higgins in 2000. The ultimate aim, of course, is to connect isolated Magallanes (which to this day can only be reached by road via Argentina) with the rest of the country. But considering all the ice, fjords, and extreme conditions in the way, it might be some time off yet.

Amazon

We now come to what is by far the most lush and life-filled place in this article: the Amazon Basin. Spanning several South American countries, the Amazon is home to the world's largest river (by water volume) and river system. Although there are some big settlements in the area (including Iquitos, the world's biggest city that's not accessible by road), in general there are more piranhas and anacondas than there are people in this jungle (the piranhas help to keep it that way!).

Map of the Amazon Basin: areas without roads (approximate) are highlighted in red.
Map of the Amazon Basin: areas without roads (approximate) are highlighted in red.
Image source: Google Earth (red highlighting by yours truly).

Considering the challenges, quite a number of roads have actually been built in the Amazon in recent decades, particularly in the Brazilian part. The best-known of these is the Transamazônica, which – although rough and muddy as can be – has connected vast swaths of the jungle to civilisation. It should also be noted that extensive road-building in this area is not necessarily a good thing: publicly-available satellite imagery clearly illustrates that, of the 20% of the Amazon that has been deforested to date, much of it has happened alongside roads.

The main parts of the Amazon that remain completely without roads are: western Estado do Amazonas and northern Estado do Pará in Brazil; most of north-eastern Peru (Departamento de Loreto); most of eastern Ecuador (the provinces in the Región amazónica del Ecuador); most of south-eastern Colombia (Amazonas, Vaupés, Guainía, Caquetá, and Guaviare departamentos); southern Venezuela (Estado de Amazonas); and the southern part of all three Guianas (Guyana, Suriname, and French Guiana).

Iquitos, Peru: a river city like no other.
Iquitos, Peru: a river city like no other.
Image source: Getty Images.

The Amazon Basin probably already has more roads than it needs (or wants). In this part of the world, the rivers are the real highways – especially the Amazon itself, which has heavy marine traffic, despite being more than 5km wide in many parts (and that's in the dry season!). In fact, it's hard for terrestrial roads to compete with the rivers: for example, the BR-319 to Manaus has been all but swallowed by the jungle, and the main access to the Amazon's biggest city remains by boat.

Antarctica

It's the world's fifth-largest continent. It's completely covered in ice. It has no permanent human population. It has no countries or (proper) territories. And it has no roads. And none of this should be a surprise to anyone!

Map of Antarctica: areas without roads (approximate) are highlighted in red.
Map of Antarctica: areas without roads (approximate) are highlighted in red.
Image source: Wikimedia Commons (red highlighting by yours truly).

As you might have guessed, not only are there no roads linking anywhere to anywhere else within Antarctica (except for ice trails), but (unlike every other area covered in this article) there aren't even any local roads within Antarctic settlements. The only regular access to Antarctica, and around Antarctica, is by air; even access by ship is difficult without helicopter support.

Where to?

There you have it: an overview of some of the most forlorn, desolate, but also beautiful places in the world, where the wonders of roads have never in all of human history been built. I've tried to cover as many relevant places as I can (and I've certainly covered more than I originally intended to), but of course I couldn't ever cover all of them. As I said, I've avoided discussion of islands, as a general rule, mainly because there is a colossal number of roadless islands around, and the list could go on forever.

I hope you've found this spin around the globe informative. And don't let such a minor inconvenience as a lack of roads stop you from visiting as many of these places as you can! Comments and feedback welcome.

]]>
Running a real Windows install in VirtualBox on Linux https://greenash.net.au/thoughts/2016/02/running-a-real-windows-install-in-virtualbox-on-linux/ Mon, 01 Feb 2016 00:00:00 +0000 https://greenash.net.au/thoughts/2016/02/running-a-real-windows-install-in-virtualbox-on-linux/ Having a complete Windows (or Mac) desktop running within Linux has been possible for some time now, thanks to the wonders of Virtual Machine (VM) technology. However, the typical approach is to mount and boot a VM image, where the guest OS and hard disk are just files on the host filesystem. In this case, the guest OS can't be natively booted and run, because it doesn't occupy its own disk or partition on the physical hardware, and therefore it can't be picked up by the BIOS / boot manager.

I've been installing Windows and Linux on the same machine, in a dual-boot setup, for many years now. In this case, I boot natively into either one or the other of the installed OSes. However, I haven't run one "real" OS (i.e. an OS that's installed on a physical disk or partition) inside the other via a VM. At least, not until now.

At my new job this year, I discovered that it's possible to do such a thing, using a feature of VirtualBox called "Raw Disk Access". With surprisingly few hiccups, I got this running with Linux Mint 17.3 as the host, and with Windows 8.1 as the guest. Each OS is installed on a separate physical hard disk. I run Windows inside the VM most of the time, but I can still boot natively into the very same install of Windows at any time, if necessary.

Instructions

  1. This should go without saying, but please back up all your data before proceeding. What I'm explaining here is dangerous, and if anything goes wrong, you are likely to lose data on your PC.
  2. If installing the two OSes on the same physical disk, then wipe the disk and create partitions for each OS as necessary (as is standard for dual-boot installs). (You can also shrink an existing Windows partition and then create the Linux partitions with the resulting free space, but this is more dangerous). If installing on different physical disks, then just keep reading.
  3. Install Windows on its respective disk or partition (if it's not installed already, e.g. a copy that came bundled with a home PC, or an SOE-configured copy on a corporate PC). Windows should boot by default.
  4. Go into your PC's BIOS setup (e.g. by pressing F12 when booting up), and ensure that "Secure Boot" and "Fast Boot" are disabled (if present), and ensure that "Launch CSM" / "Launch PXE OpROM" (or similar) are enabled (if present).
  5. Install your preferred flavour of Linux on the other disk or partition. After doing this, GRUB should boot on startup, and it should let you choose to load Windows or Linux.
  6. Install VirtualBox on Debian-based systems (e.g. Mint, Ubuntu) with:
    sudo apt-get install virtualbox
    sudo apt-get install virtualbox-dkms
    
  7. Use a tool such as fdisk or parted to determine the partitions that the VM will need to access. In my case, for my Windows disk, it was partitions 1 (boot / EFI), 4 (recovery), and 5 (OS / "C drive").
    Partition table of my Windows disk as shown in GParted.
    Partition table of my Windows disk as shown in GParted.
  8. Use this command (with your own filename / disk / partitions specified) to create the "raw disk", which is effectively a file that acts as a pointer to a disk / partition on which an OS is installed:
    sudo VBoxManage internalcommands createrawvmdk \
    -filename "/path/to/win8.vmdk" -rawdisk /dev/sda \
    -partitions 1,4,5

  9. Create a new VM in the VirtualBox GUI, with the OS and version that correspond to your install of Windows. In the "Storage" settings for the VM, add a hard disk (when prompted, click "Choose existing disk"), and point it to the .vmdk file that you created.
    VirtualBox treats the "raw" .vmdk file as if it were a virtual disk contained in a file.
    VirtualBox treats the "raw" .vmdk file as if it were a virtual disk contained in a file.
  10. Start up your VM. You should see the same desktop that you have when you boot Windows natively!
  11. Install VirtualBox Guest Additions as you would for a normal Windows VM, in order to get the usual VM bells and whistles (i.e. resizable window, mouse / clipboard integration, etc).
  12. After you've been running your "real" Windows in the VM for a while, it will ask you to "Activate Windows". It will do this even if your Windows install is already activated when running natively. This is because Windows sees itself running within the VM, and sees "different hardware" (i.e. it thinks it's been installed on a second physical machine). You will have to activate Windows a second time within the VM (e.g. using a corporate bulk license key, by calling Microsoft, etc).

Done

That's all there is to it. I should acknowledge that this guide is based on various other guides with similar instructions. Most online sources warn very strongly that running Windows in this way is dangerous and can corrupt your system. Personally, I've now been running "raw" Windows in a VM like this every day for several weeks, with no major issues. The VM does crash sometimes (once every few days for me), as VMs do, and as Windows does. But nothing more serious than that.

I guess I should also warn readers of the potential dangers of this setup. It worked for me, but YMMV. I've also heard rumour that, on Windows 8 and higher, Windows copes much better than it used to with booting on "different hardware" each startup (the real physical hardware one day, the hardware presented by VirtualBox the next). It certainly doesn't seem to be an issue for me.

At any rate, I'm now happy; at least, as happy as someone who runs Windows in a VM all day can physically be. Hey, at least it's Linux outside that box on my screen. Good luck in having your cake and eating it, too.

]]>
Introducing Flask Editable Site https://greenash.net.au/thoughts/2015/10/introducing-flask-editable-site/ Tue, 27 Oct 2015 00:00:00 +0000 https://greenash.net.au/thoughts/2015/10/introducing-flask-editable-site/ I'd like to humbly present Flask Editable Site, a template for building a small marketing web site in Flask where all content is live editable. Here's a demo of the app in action.

Text and image block editing with Flask Editable Site.
Text and image block editing with Flask Editable Site.

The aim of this app is to demonstrate that, with the help of modern JS libraries, and with some well-thought-out server-side snippets, it's now perfectly possible to "bake in" live in-place editing for virtually every content element in a typical brochureware site.

This app is not a CMS. On the contrary, think of it as a proof-of-concept alternative to a CMS. An alternative where there's no "admin area", there's no "editing mode", and there's no "preview button". There's only direct manipulation.

"Template" means that this is a sample app. It comes with a bunch of models that work out-of-the-box (e.g. text content block, image content block, gallery item, event). However, these are just a starting point: you can and should define your own models when building a real site. Same with the front-end templates: the home page layout and the CSS styles are just examples.

About that "template" idea

I can't stress enough that this is not a CMS. There are of course plenty of CMSes out there already, in Python and in every other language under the sun. Several of those CMSes I have used extensively. I've even been paid to build web sites with them, for most of my professional life so far. I desire neither to add to that list, nor to take on the heavy maintenance burden that doing so would entail.

What I have discovered as a web developer, and what I'm sure all web developers discover sooner or later, is that there's no such thing as the perfect CMS. Possibly, there isn't even such a thing as a good CMS! If you want to build a web site with a content management experience that's highly tailored to the project in question, then really, you have to build a unique custom CMS just for that site. Deride me as a perfectionist if you want, but that's my opinion.

There is such a thing as a good framework. Flask Editable Site, as its name suggests, uses the Flask framework, which has the glorious honour of being my favourite framework these days. And there is definitely such a thing as a good library. Flask Editable Site uses a number of both front-end and back-end libraries. The best libraries can be easily mashed up together in different configurations, on top of different frameworks, to help power a variety of different apps.

Flask Editable Site is not a CMS. It's a sample app, which is a template for building a unique CMS-like app tailor-made for a given project. If you're doing it right, then no two projects based on Flask Editable Site will be the same app. Every project has at least slightly different data models, users / permissions, custom forms, front-end widgets, and so on.

So, there's the practical aim of demonstrating direct manipulation / live editing. However, Flask Editable Site has a philosophical aim, too. The traditional "building a super one-size-fits-all app to power 90% of sites" approach isn't necessarily a good one. You inevitably end up fighting the super-app, and hacking around things to make it work for you. Instead, how about "building and sharing a template for making each site its own tailored app"? How about accepting that "every site is a hack", and embracing that instead of fighting it?

Thanks and acknowledgements

Thanks to all the libraries that Flask Editable Site uses; in each case, I tried to choose the best library available at the present time, for achieving a given purpose:

  • Dante contenteditable WYSIWYG editor, a Medium editor clone. I had previously used MediumEditor, and I recommend it too, but I feel that Dante gives a more polished out-of-the-box experience for now. I think the folks at Medium have done a great job in setting the bar high for beautiful rich-text editing, which is an important part of the admin experience for many web sites / apps.
  • Dropzone.js image upload widget. C'mon, people, it's 2015. Death to HTML file fields for uploads. Drag and drop with image preview, bring it on. From my limited research, Dropzone.js seems to be the clear leader of this pack at the moment.
  • Bootstrap datetimepicker for calendar picker and hour/minute selector.
  • Bootstrap 3 for pretty CSS styles and grid layouts. I admit I've become a bit of a Bootstrap addict lately. For developers with non-existent artistic ability, like myself, it's impossible to resist. Font Awesome is rather nice, too.
  • Markovify for random text generation. I discovered this one (and several alternative implementations of it) while building Flask Editable Site, and I'm hooked. Adios, Lorem Ipsum, and don't let the door hit you on your way out.
  • Bootstrap Freelancer theme by Start Bootstrap. Although Flask Editable Site uses vanilla Bootstrap, I borrowed various snippets of CSS / JS from this theme, as well as the overall layout.
  • cookiecutter-flask, a Flask app template. I highly recommend this as a guide to best-practice directory layout, configuration management, and use of patterns in a Flask app. Thanks to these best practices, Flask Editable Site is also reasonably Twelve-Factor compliant, especially in terms of config and backing services.

Flask Editable Site began as the codebase for The Daydream Believers Performers web site, which I built pro-bono as a side project recently. So, acknowledgements to that group for helping to make Flask Editable Site happen.

For the live editing UX, I acknowledge that I drew inspiration from several examples. First and foremost, from Mezzanine, a CMS (based on Django) which I've used on occasion. Mezzanine puts "edit" buttons in-place next to most text fields on a site, and pops up a traditional (i.e. non contenteditable) WYSIWYG editor when these are clicked.

I also had a peek at Create.js, which takes care of the front-end side of live content editing quite similarly to the way I've cobbled it together. In Flask Editable Site, the combo of Dante editor and my custom "autosave" JS could easily be replaced with Create.js (particularly when using Hallo editor, which is quite minimalist like Dante); I guess it's just a question of personal taste.

Sir Trevor JS is an interesting new kid on the block. I'm quite impressed with Sir Trevor, but its philosophy of "adding blocks of anything down the page" isn't such a great fit for Flask Editable Site, where the idea is that site admins can only add / edit content within specific constraints for each block on the page. However, for sites with no structured content models, where it's OK for each page to be a free canvas (or for a "free canvas" within, say, each blog post on a site), I can see Sir Trevor being a real game-changer.

There's also X-editable, which is the only JS solution that I've come across for nice live editing of list-type content (i.e. checkboxes, radio buttons, tag fields, autocomplete boxes, etc). I haven't used X-editable in Flask Editable Site, because I'm mainly dealing with text and image fields (and for date / time fields, I prefer a proper calendar widget). But if I needed live editing of list fields, X-editable would be my first choice.

Final thoughts

I must stress that, as I said above, Flask Editable Site is a proof-of-concept. It doesn't have all the features you're going to need for your project foo. In particular, it doesn't support very many field types: only text ("short text" and "rich text"), date, time, and image. It should also support inline images and (YouTube / Vimeo) videos out-of-the-box, as this is included with Dante, but I haven't tested it. For other field types, forks / pull requests / sister projects are welcome.

If you look at the code (particularly the settings.py file and the home view), you should be able to add live editing of new content models quite easily, with just a bit of copy-pasting and tweaking. The idea is that the editable.views code is generic enough that you won't need to change it at all when adding new models / fields in your back-end. At least, that's the idea.
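
To give a rough feel for what "adding a new model" involves, here's a hypothetical sketch of a simple content-block model, written in Flask-SQLAlchemy style. To be clear, this is not code from the Flask Editable Site repo – the class, field, and table names here are invented for illustration – but a real model in such an app would be of roughly this shape and size:

from flask_sqlalchemy import SQLAlchemy

# In a real app, the db object would live in a shared extensions module.
db = SQLAlchemy()


class ShortTextContentBlock(db.Model):
    """A one-line editable text snippet, e.g. a site tagline or heading."""
    __tablename__ = 'short_text_content_block'

    id = db.Column(db.Integer, primary_key=True)
    # Machine name that templates use to look up this block.
    slug = db.Column(db.String(100), unique=True, nullable=False)
    # The actual editable content.
    content = db.Column(db.String(255), nullable=False, default='')

The point being: the model layer stays boring and declarative, and the generic view / JS code takes care of making each such field live-editable.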

Quite a lot of the code in Flask Editable Site is more complex than it strictly needs to be, in order to support "session store mode", where all content is saved to the current user's session instead of to the database (preferably using something like Memcached or temp files, rather than cookies, although that depends on what settings you use). I developed "session store mode" in order to make the demo site work without requiring any hackery such as a scheduled DB refresh (which is the usual solution in such cases). However, I can see it also being useful for sandbox environments, for UAT, and for reviewing design / functionality changes without "real" content getting in the way.

The app also includes a fair bit of code for random generation and selection of sample text and image content. This was also done primarily for the purposes of the demo site. But, upon reflection, I think that a robust solution for randomly populating a site's content is really something that all CMS-like apps should consider more seriously. The exact algorithms and sample content pools for this, of course, are a matter of taste. But the point is that it's not just about pretty pictures and amusing Dickensian text. It's about the mindset of treating content dynamically, and of recognising the bounds and the parameters of each placeholder area on the page. And what better way to enforce that mindset, than by seeing a different random set of content every time you restart the app?

I decided to make this project a good opportunity for getting my hands dirty with thorough unit / functional testing. As such, Flask Editable Site is my first open-source effort that features automated testing via Travis CI, as well as test coverage reporting via Coveralls. As you can see on the GitHub page, tests are passing and coverage is pretty good. The tests are written in pytest, with significant help from webtest, too. I hope that the tests also serve as a template for other projects; all too often, with small brochureware sites, formal testing is done sparingly if at all.

Regarding the "no admin area" principle, Flask Editable Site has taken quite a purist approach to this. Personally, I think that radically reducing the role of "admin areas" in web site administration will lead to better UX. Anything that's publicly visible on the site, should be editable first and foremost via direct manipulation. However, in reality there will always be things that aren't publicly visible, and that admins still need to edit. For example, sites will always need user / role CRUD pages (unless you're happy to only manage users via shell commands). So, if you do add admin pages to a project based on Flask Editable Site, please don't feel as though you're breaking some golden rule.

Hope you enjoy playing around with the app. Who knows, maybe you'll even build something useful based on it. Feedback, bug reports, pull requests, all welcome.

]]>
Cookies can't be more than 4KiB in size https://greenash.net.au/thoughts/2015/10/cookies-cant-be-more-than-4kib-in-size/ Thu, 15 Oct 2015 00:00:00 +0000 https://greenash.net.au/thoughts/2015/10/cookies-cant-be-more-than-4kib-in-size/ Did you know: you can't reliably store more than 4KiB (4096 bytes) of data in a single browser cookie? I didn't until this week.

What, I can't have my giant cookie and eat it too? Outrageous!
What, I can't have my giant cookie and eat it too? Outrageous!
Image source: Giant Chocolate chip cookie recipe.

I'd never before stopped to think about whether or not there was a limit to how much you can put in a cookie. Usually, cookies only store very small string values, such as a session ID, a tracking code, or a browsing preference (e.g. "tile" or "list" for search results). So, usually, there's no need to consider its size limits.

However, while working on a new side project of mine that heavily uses session storage, I discovered this limit the hard (to debug) way. Anyway, now I've got one more adage to add to my developer's phrasebook: if you're trying to store more than 4KiB in a cookie, you're doing it wrong.

Actually, according to the web site Browser Cookie Limits, the safe "lowest common denominator" maximum size to stay below is 4093 bytes. Also check out the Stack Overflow discussion, What is the maximum size of a web browser's cookie's key?, for more commentary regarding the limit.

In my case – working with Flask, which depends on Werkzeug – trying to store an oversized cookie doesn't throw any errors, it simply fails silently. I've submitted a patch to Werkzeug, to make oversized cookies raise an exception, so hopefully it will be more obvious in future when this problem occurs.
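
In the meantime, it's easy enough to guard against the limit yourself, rather than waiting for your framework to complain. Here's a minimal sketch for Flask – the 4093-byte threshold and the helper function are my own additions, not part of Flask's or Werkzeug's API:

from flask import Flask, make_response

app = Flask(__name__)

# A conservative cross-browser limit for a whole "key=value" pair.
MAX_COOKIE_SIZE = 4093


def set_cookie_checked(response, key, value, **kwargs):
    # Fail loudly, instead of letting the browser silently truncate or
    # drop the cookie.
    size = len(key.encode('utf-8')) + len(value.encode('utf-8')) + 1
    if size > MAX_COOKIE_SIZE:
        raise ValueError(
            "Cookie '{key}' is {size} bytes; browsers only reliably "
            "store about {limit} bytes per cookie.".format(
                key=key, size=size, limit=MAX_COOKIE_SIZE))
    response.set_cookie(key, value, **kwargs)
    return response


@app.route('/')
def index():
    response = make_response("Hello")
    return set_cookie_checked(response, 'search_view', 'tile')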

It appears that this is not an isolated issue; many web frameworks and libraries fail silently with storage of too-big cookies. It's the case with Django, where the decision was made to not fix it, for technical reasons. Same story with CodeIgniter. Seems that Ruby on Rails is well-behaved and raises exceptions. Basically, your mileage may vary: don't count on your framework of choice alerting you, if you're being a cookie monster.

Also, as several others have pointed out, trying to store too much data in cookies is a bad idea anyway, because that data travels with every HTTP request and response, so it should be as small as possible. As I learned, if you find that you're dealing with non-trivial amounts of session data, then ditch client-side storage for the app in question, and switch to server-side session data storage (preferably using something like Memcached or Redis).
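
In Flask's case, one way to do that is with the Flask-Session extension, which swaps the default cookie-based session for a server-side store. A rough sketch, assuming a local Redis instance (double-check the config keys against the extension's docs for your version):

import redis
from flask import Flask, session
from flask_session import Session

app = Flask(__name__)

# Keep session data in Redis; the cookie now only carries a session ID,
# which is well under the 4KiB limit.
app.config['SESSION_TYPE'] = 'redis'
app.config['SESSION_REDIS'] = redis.StrictRedis(host='localhost', port=6379)
Session(app)


@app.route('/save-draft')
def save_draft():
    # This can now be (almost) as big as you like.
    session['draft_content'] = 'lots and lots of data...'
    return "Saved."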

]]>
Robert Dawson: the first anthropologist of Aborigines? https://greenash.net.au/thoughts/2015/09/robert-dawson-the-first-anthropologist-of-aborigines/ Sat, 26 Sep 2015 00:00:00 +0000 https://greenash.net.au/thoughts/2015/09/robert-dawson-the-first-anthropologist-of-aborigines/ The treatment of Aboriginal Australians in colonial times was generally atrocious. This is now well known and accepted by most. Until well into the 20th century, Aborigines were subjected to exploitation, abuse, and cold-blooded murder. They were regarded as sub-human, and they were not recognised at all as the traditional owners of their lands. For a long time, virtually no serious attempts were made to study or to understand their customs, their beliefs, and their languages. On the contrary, the focus was on "civilising" them by imposing upon them a European way of life, while their own lifestyle was held in contempt as "savage".

I recently came across a gem of literary work, from the early days of New South Wales: The Present State of Australia, by Robert Dawson. The author spent several years (1826-1828) living in the Port Stephens area (about 200km north of Sydney), as chief agent of the Australian Agricultural Company, where he was tasked with establishing a grazing property. During his time there, Dawson lived side-by-side with the Worimi indigenous peoples, and Worimi anecdotes form a significant part of his book (which, officially, is focused on practical advice for British people considering migration to the Australian frontier).

Robert Dawson of the Australian Agricultural Company.
Robert Dawson of the Australian Agricultural Company.
Image source: Wikimedia Commons.

In this article, I'd like to share quite a number of quotes from Dawson's book, which in my opinion may well constitute the oldest known (albeit informal) anthropological study of Indigenous Australians. Considering his rich account of Aboriginal tribal life, I find it surprising that Dawson seems to have been largely forgotten by the history books, and that The Present State of Australia has never been re-published since its first edition in 1830 (the copies produced in 1987 are just facsimiles of the original). I hope that this article serves as a tribute to someone who was an exemplary exception to what was then the norm.

Language

The book includes many passages containing Aboriginal words interspersed with English, as well as English words spelt phonetically (and amusingly) as the tribespeople pronounced them; contemporary Australians should find many of these examples familiar, from the modern-day Aboriginal accents:

Before I left Port Stephens, I intimated to them that I should soon return in a "corbon" (large) ship, with a "murry" (great) plenty of white people, and murry tousand things for them to eat … They promised to get me "murry tousand bark." "Oh! plenty bark, massa." "Plenty black pellow, massa: get plenty bark." "Tree, pour, pive nangry" (three, four, five days) make plenty bark for white pellow, massa." "You come back toon?" "We look out for corbon ship on corbon water," (the sea.) "We tee, (see,) massa." … they sent to inform me that they wished to have a corrobery (dance) if I would allow it.

(page 60)

On occasion, Dawson even goes into grammatical details of the indigenous languages:

"Bael me (I don't) care." The word bael means no, not, or any negative: they frequently say, "Bael we like it;" "Bael dat good;" "Bael me go dere."

(page 65)

It's clear that Dawson himself became quite prolific in the Worimi language, and that – at least for a while – an English-Worimi "creole" emerged as part of white-black dialogue in the Port Stephens area.

Food and water

Although this is probably one of the better-documented Aboriginal traits from the period, I'd also like to note Dawson's accounts of the tribespeoples' fondness for European food, especially for sugar:

They are exceedingly fond of biscuit, bread, or flour, which they knead and bake in the ashes … but the article of food which appears most delicious to them, is the boiled meal of Indian corn; and next to it the corn roasted in the ashes, like chestnuts: of sugar too they are inordinately fond, as well as of everything sweet. One of their greatest treats is to get an Indian bag that has had sugar in it: this they cut into pieces and boil in water. They drink this liquor till they sometimes become intoxicated, and till they are fairly blown out, like an ox in clover, and can take no more.

(page 59)

Dawson also described their manner of eating; his account is not exactly flattering, and he clearly considers this behaviour to be "savage":

The natives always eat (when allowed to do so) till they can go on no longer: they then usually fall asleep on the spot, leaving the remainder of the kangaroo before the fire, to keep it warm. Whenever they awake, which is generally three or four times during the night, they begin eating again; and as long as any food remains they will never stir from the place, unless forced to do so. I was obliged at last to put a stop, when I could, to this sort of gluttony, finding that it incapacitated them from exerting themselves as they were required to do the following day.

(page 123)

Regarding water, Dawson gave a practical description of the Worimi technique for getting a drink in the bush in dry times (and admits that said technique saved him from being up the creek a few times); now, of course, we know that similar techniques were common for virtually all Aboriginal peoples across Australia:

It sometimes happens, in dry seasons, that water is very scarce, particularly near the shores. In such cases, whenever they find a spring, they scratch a hole with their fingers, (the ground being always sandy near the sea,) and suck the water out of the pool through tufts or whisps of grass, in order to avoid dirt or insects. Often have I witnessed and joined in this, and as often felt indebted to them for their example.

They would walk miles rather than drink bad water. Indeed, they were such excellent judges of water, that I always depended upon their selection when we encamped at a distance from a river, and was never disappointed.

(page 150)

Tools and weapons

In numerous sections, Dawson described various tools that the Aborigines used, and their skill and dexterity in fashioning and maintaining them:

[The old man] scraped the point of his spear, which was at least about eight feet long, with a broken shell, and put it in the fire to harden. Having done this, he drew the spear over the blaze of the fire repeatedly, and then placed it between his teeth, in which position he applied both his hands to straighten it, examining it afterwards with one eye closed, as a carpenter would do his planed work. The dexterous and workmanlike manner in which he performed his task, interested me exceedingly; while the savage appearance and attitude of his body, as he sat on the ground before a blazing fire in the forest, with a black youth seated on either side of him, watching attentively his proceedings, formed as fine a picture of savage life as can be conceived.

(page 16)

To the modern reader such as myself, Dawson's use of language (e.g. "a picture of savage life") invariably gives off a whiff of contempt and "European superiority". Personally, I try to give him the benefit of the doubt, and to brush this off as simply "using the vernacular of the time". In my opinion, this is fair justification for Dawson's manner of writing to some extent; but it also shows that he wasn't completely innocent, either: he too held some of the very views which he criticised in his contemporaries.

The tribespeople also exercised great agility in gathering the raw materials for their tools and shelters:

Before a white man can strip the bark beyond his own height, he is obliged to cut down the tree; but a native can go up the smooth branchless stems of the tallest trees, to any height, by cutting notches in the surface large enough only to place the great toe in, upon which he supports himself, while he strips the bark quite round the tree, in lengths from three to six feet. These form temporary sides and coverings for huts of the best description.

(page 19)

And they were quite dexterous in their crafting of nets and other items:

They [the women] make string out of bark with astonishing facility, and as good as you can get in England, by twisting and rolling it in a curious manner with the palm of the hand on the thigh. With this they make nets … These nets are slung by a string round their forehead, and hang down their backs, and are used like a work-bag or reticule. They contain all the articles they carry about with them, such as fishing hooks made from oyster or pearl shells, broken shells, or pieces of glass, when they can get them, to scrape the spears to a thin and sharp point, with prepared bark for string, gum for gluing different parts of their war and fishing spears, and sometime oysters and fish when they move from the shore to the interior.

(page 67)

Music and dance

Dawson wrote fondly of his being witness to corroborees on several occasions, and he recorded valuable details of the song and dance involved:

A man with a woman or two act as musicians, by striking two sticks together, and singing or bawling a song, which I cannot well describe to you; it is chiefly in half tones, extending sometimes very high and loud, and then descending so low as almost to sink to nothing. The dance is exceedingly amusing, but the movement of the limbs is such as no European could perform: it is more like the limbs of a pasteboard harlequin, when set in motion by a string, than any thing else I can think of. They sometimes changes places from apparently indiscriminate positions, and then fall off in pairs; and after this return, with increasing ardour, in a phalanx of four and five deep, keeping up the harlequin-like motion altogether in the best time possible, and making a noise with their lips like "proo, proo, proo;" which changes successively to grunting, like the kangaroo, of which it is an imitation, and not much unlike that of a pig.

(page 61)

Note Dawson's poetic efforts to bring to life the corroboree in words, with "bawling" sounds, "phalanx" movements, and "harlequin-like motion". Modern-day writers probably wouldn't bother to go to such lengths, instead assuming that their audience is familiar with the sights and sounds in question (at the very least, from TV shows). Dawson, who was writing for an English audience of the 1830s, didn't enjoy this luxury.

Families

In an era when most "white fellas" in the Colony were irrevocably destroying traditional Aboriginal family ties (a practice that was to continue well into the 20th century), Dawson was appreciating and making note of the finer details that he witnessed:

They are remarkably fond of their children, and when the parents die, the children are adopted by the unmarried men and women, and taken the greatest care of.

(page 68)

He also observed the prevalence of monogamy amongst the tribes he encountered:

The husband and wife are in general remarkably constant to each other, and it rarely happens that they separate after having considered themselves as man and wife; and when an elopement or the stealing of another man's gin [wife] takes place, it creates great, and apparently lasting uneasiness in the husband.

(page 154)

As well as the enduring bonds between parents and children:

The parents retain, as long as they live, an influence over their children, whether married or not – I then asked him the reason of this [separating from his partner], and he informed me his mother did not like her, and that she wanted him to choose a better.

(page 315)

Dawson made note of the good and the bad; in the case of families, he documented (and deplored) the prevalence of domestic violence towards women in the Aboriginal tribes:

On our first coming here, several instances occurred in our sight of the use of this waddy [club] upon their wives … When the woman sees the blow coming, she sometimes holds her head quietly to receive it, much like Punch and his wife in the puppet-shows; but she screams violently, and cries much, after it has been inflicted. I have seen but few gins [wives] here whose heads do not bear the marks of the most dreadful violence of this kind.

(page 66)

Clothing

Some comical accounts of how the Aborigines took to the idea of clothing in the early days:

They are excessively fond of any part of the dress of white people. Sometimes I see them with an old hat on: sometimes with a pair of old shoes, or only one: frequently with an old jacket and hat, without trowsers: or, in short, with any garment, or piece of a garment, that they can get.

(page 75)

They usually reacted well to gifts of garments:

On the following morning I went on board the schooner, and ordered on shore a tomahawk and a suit of slop clothes, which I had promised to my friend Ben, and in which he was immediately dressed. They consisted of a short blue jacket, a checked shirt, and a pair of dark trowsers. He strutted about in them with an air of good-natured importance, declaring that all the harbour and country adjoining belonged to him. "I tumble down [born] pickaninny [child] here," he said, meaning that he was born there. "Belonging to me all about, massa; pose you tit down here, I gib it to you." "Very well," I said: "I shall sit down here." "Budgeree," (very good,) he replied, "I gib it to you;" and we shook hands in ratification of the friendly treaty.

(page 12)

Death and religion

Yet another topic which was scarcely investigated by Dawson's colonial peers – and which we now know to have been of paramount importance in all Aboriginal belief systems – is that of rituals regarding death and mourning:

… when any of their relations die, they show respect for their memories by plastering their heads and faces all over with pipe-clay, which remains till it falls off of itself. The gins [wives] also burn the front of the thigh severely, and bind the wound up with thin strips of bark. This is putting themselves in mourning. We put on black; they put on white: so that it is black and white in both cases.

(page 74)

The Aborigines that Dawson became acquainted with, were convinced that the European settlers were re-incarnations of their ancestors; this belief was later found to be fairly widespread amongst Australia's indigenous peoples:

I cannot learn, precisely, whether they worship any God or not; but they are firm in their belief that their dead friends go to another country; and that they are turned into white men, and return here again.

(page 74)

Dawson appears to have debated this topic at length with the tribespeople:

"When he [the devil] makes black fellow die," I said, "what becomes of him afterwards?" "Go away Englat," (England,) he answered, "den come back white pellow." This idea is so strongly impressed upon their minds, that when they discover any likeness between a white man and any one of their deceased friends, they exclaim immediately, "Dat black pellow good while ago jump up white pellow, den come back again."

(page 158)

Inter-tribe relations

During his time with the Worimi and other tribes, Dawson observed many of the details of how neighbouring tribes interacted, for example, in the case of inter-tribal marriage:

The blacks generally take their wives from other tribes, and if they can find opportunities they steal them, the consent of the female never being made a question in the business. When the neighbouring tribes happen to be in a state of peace with each other, friendly visits are exchanged, at which times the unmarried females are carried off by either party.

(page 153)

In one chapter, Dawson gives an amusing account of how the Worimi slandered and villainised another tribe (the Myall people), with whom they were on unfriendly terms:

The natives who domesticate themselves amongst the white inhabitants, are aware that we hold cannibalism in abhorrence; and in speaking of their enemies, therefore, to us, they always accuse them of this revolting practice, in order, no doubt, to degrade them as much as possible in our eyes; while the other side, in return, throw back the accusation upon them. I have questioned the natives who were so much with me, in the closest manner upon this subject, and although they persist in its being the practice of their enemies, still they never could name any particular instances within their own knowledge, but always ended by saying: "All black pellow been say so, massa." When I have replied, that Myall black fellows accuse them of it also, the answer has been, "Nebber! nebber black pellow belonging to Port Tebens, (Stephens;) murry [very] corbon [big] lie, massa! Myall black pellows patter (eat) always."

(page 125)

The book also explains that the members of a given tribe generally kept within their own ancestral lands, and that they were reluctant and fearful of too-often making contact with neighbouring tribes:

… the two natives who had accompanied them had become frightened at the idea of meeting strange natives, and had run away from them about the middle of their journey …

(page 24)

Relations with colonists

Throughout the book, general comments are made that place the fault, and the aggression, squarely with the "white fella" of the Colony:

The natives are a mild and harmless race of savages; and where any mischief has been done by them, the cause has generally arisen, I believe, in bad treatment by their white neighbours. Short as my residence has been here, I have, perhaps, had more intercourse with these people, and more favourable opportunities of seeing what they really are, than any other person in the colony.

(page 57)

Dawson provides a number of specific examples of white aggression towards the Aborigines:

The natives complained to me frequently, that "white pellow" (white fellows) shot their relations and friends; and showed me many orphans, whose parents had fallen by the hands of white men, near this spot. They pointed out one white man, on his coming to beg some provisions for his party up the river Karuah, who, they said, had killed ten; and the wretch did not deny it, but said he would kill them whenever he could.

(page 58)

Sydney

As a modern-day Sydneysider myself, I had a good chuckle reading Dawson's account of his arrival for the first time in Sydney, in 1826:

There had been no arrival at Sydney before us for three or four months. The inhabitants were, therefore, anxious for news. Parties of ladies and gentlemen were parading on the sides of the hills above us, greeting us every now and then, as we floated on; and as soon as we anchored, (which was on a Sunday,) we were boarded by numbers of apparently respectable people, asking for letters and news, as if we had contained the budget of the whole world.

(page 46)

View of Sydney Cove from Dawes Point by Joseph Lycett, ca 1817-1818.
View of Sydney Cove from Dawes Point by Joseph Lycett, ca 1817-1818.
Image source: Wikipedia.

No arrival in Sydney, from the outside world, for "three or four months"?! Who would have thought that a backwater penal town such as this would one day become a cosmopolitan world city that sees a jumbo jet land and take off every 5 minutes, every day of the week? Although, it seems that even back then, Dawson foresaw something of Sydney's future:

On every side of the town [Sydney] houses are being erected on new ground; steam engines and distilleries are at work; so that in a short time a city will rise up in this new world equal to any thing out of Europe, and probably superior to any other which was ever created in the same space of time.

(page 47)

And, even back then, there were some (like Dawson) who preferred to get out of the "rat race" of Sydney town:

Since my arrival I have spent a good deal of time in the woods, or bush, as it is called here. For the last five months I have not entered or even seen a house of any kind. My habitation, when at home, has been a tent; and of course it is no better when in the bush.

(page 48)

Stockton Beach, just below Port Stephens, was several years ago declared Worimi land.
Stockton Beach, just below Port Stephens, was several years ago declared Worimi land.
Image source: NSW National Parks.

There's still a fair bit of bush all around Sydney; although, sadly, not as much as there was in Dawson's day.

General remarks

Dawson's impression of the Aborigines:

I was much amused at this meeting, and above all delighted at the prompt and generous manner in which this wild and untutored man conducted himself towards his wandering brother. If they be savages, thought I, they are very civil ones; and with kind treatment we have not only nothing to fear, but a good deal to gain from them. I felt an ardent desire to cultivate their acquaintance, and also much satisfaction from the idea that my situation would afford me ample opportunities and means for doing so.

(page 11)

Nomadic nature of the tribes:

When away from this settlement, they appear to have no fixed place of residence, although they have a district of country which they call theirs, and in some part of which they are always to be found. They have not, as far as I can learn, any king or chief.

(page 63)

Tribal punishment:

I have never heard but of one punishment, which is, I believe, inflicted for all offences. It consists in the culprit standing, for a certain time, to defend himself against the spears which any of the assembled multitude think proper to hurl at him. He has a small target [shield] … and the offender protects himself so dexterously by it, as seldom to receive any injury, although instances have occurred of persons being killed.

(page 64)

Generosity of Aborigines (also illustrating their lack of a concept of ownership / personal property):

They are exceedingly kind and generous towards each other: if I give tobacco or any thing else to any man, it is divided with the first he meets without being asked for it.

(page 68)

Ability to count / reckon:

They have no idea of numbers beyond five, which are reckoned by the fingers. When they wish to express a number, they hold up so many fingers: beyond five they say, "murry tousand," (many thousands.)

(page 75)

Protocol for returning travellers in a tribe:

It is not customary with the natives of Australia to shake hands, or to greet each other in any way when they meet. The person who has been absent and returns to his friends, approaches them with a serious countenance. The party who receives him is the first to speak, and the first questions generally are, where have you been? Where did you sleep last night? How many days have you been travelling? What news have you brought? If a member of the tribe has been very long absent, and returns to his family, he stops when he comes within about ten yards of the fire, and then sits down. A present of food, or a pipe of tobacco is sent to him from the nearest relation. This is given and received without any words passing between them, whilst silence prevails amongst the whole family, who appear to receive the returned relative with as much awe as if he had been dead, and it was his spirit which had returned to them. He remains in this position perhaps for half an hour, till he receives a summons to join his family at the fire, and then the above questions are put to him.

(page 132)

Final thoughts

The following pages are not put forth to gratify the vanity of authorship, but with the view of communicating facts where much misrepresentation has existed, and to rescue, as far as I am able, the character of a race of beings (of whom I believe I have seen more than any other European has done) from the gross misrepresentations and unmerited obloquy that has been cast upon them.

(page xiii)

Dawson wasn't exactly modest, in his assertion of being the foremost person in the Colony to make a fair representation of the Aborigines; however, I'd say his assertion is quite accurate. As far as I know, he does stand out as quite a solitary figure for his time, in his efforts to meaningfully engage with the tribes of the greater Sydney region, and to document them in a thorough and (relatively) unprejudiced work of prose.

I would therefore recommend those who would place the Australian natives on the level of brutes, to reflect well on the nature of man in his untutored state in comparison with his more civilized brother, indulging in endless whims and inconsistencies, before they venture to pass a sentence which a little calm consideration may convince them to be unjust.

(page 152)

Dawson's criticism of the prevailing attitudes was scathing, although it was clearly criticism that was ignored and unheeded by his contemporaries.

It is not sufficient merely as a passing traveller to see an aboriginal people in their woods and forests, to form a just estimate of their real character and capabilities … To know them well it is necessary to see much more of them in their native wilds … In this position I believe no man has ever yet been placed, although that in which I stood approached more nearly to it than any other known in that country.

(page 329)

With statements like this, Dawson is inviting his fellow colonists to "go bush" and to become acquainted with an Aboriginal tribe, as he did. From others' accounts of the era, those who followed in his footsteps were few and far between.

I have seen the natives from the coast far south of Sydney, and thence to Morton Bay (sic), comprising a line of coast six or seven hundred miles; and I have also seen them in the interior of Argyleshire and Bathurst, as well as in the districts of the Hawkesbury, Hunter's River, and Port Stephens, and have no reason whatever to doubt that they are all the same people.

(page 336)

So, why have a man and a book with so much to say about early contact with the Aborigines lain largely forgotten and abandoned by the fickle sands of history? Probably the biggest reason is that Dawson was just a common man. Sure, he was the first agent of AACo: but he was no intrepid explorer, like Burke and Wills; nor an important governor, like Arthur Phillip or Lachlan Macquarie. While the diaries and letters of bigwigs like these have been studied and re-published constantly, not everyone can enjoy the historical limelight.

No doubt another key factor was that Dawson ultimately fell out badly with the powerful Macarthur family, who were effectively his employers during his time in Port Stephens. The Present State of Australia is riddled with thinly veiled slurs at the Macarthurs, and it's quite likely that this guaranteed the book would not be taken seriously by anyone, in the Colony or elsewhere, for a long time.

Dawson's work is, in my opinion, an outstanding record of indigenous life in Australia, at a time when the ancient customs and beliefs were still alive and visible throughout most of present-day NSW. It also illustrates the human history of a geographically beautiful region that's quite close to my heart. Like many Sydneysiders, I've spent several summer holidays at Port Stephens during my life. I've also been camping countless times at nearby Myall Lakes; and I have some very dear family friends in Booral, a small town which sits alongside the Karuah River just upstream from Port Stephens (and which also falls within Worimi country).

Port Stephens looking East, Tahlee in foreground, Augustus Earle, ca 1827.
Port Stephens looking East, Tahlee in foreground, Augustus Earle, ca 1827.
Image source: State Library of New South Wales.

In leaving as a legacy his narrative of the Worimi people and their neighbours (which is, as far as I know, the only surviving first-hand account of any significance of these people from the colonial era), I believe that Dawson's work should be lauded and celebrated. At a time when the norm for bush settlers was to massacre and to wreak havoc upon indigenous peoples, Dawson instead chose to respect and to make friends with those that he encountered.

Personally, I think the honour of "first anthropologist of the Aborigines" is one that Dawson can rightly claim (although others may feel free to dispute this). Descendants of the Worimi live in the Port Stephens area to this day; and I hope that they appreciate Dawson's tribute, as no doubt the spirits of their ancestors do.

]]>
Splitting a Python codebase into dependencies for fun and profit https://greenash.net.au/thoughts/2015/06/splitting-a-python-codebase-into-dependencies-for-fun-and-profit/ Tue, 30 Jun 2015 00:00:00 +0000 https://greenash.net.au/thoughts/2015/06/splitting-a-python-codebase-into-dependencies-for-fun-and-profit/ When the Python codebase for a project (let's call the project LasagnaFest) starts getting big, and when you feel the urge to re-use a chunk of code (let's call that chunk foodutils) in multiple places, there are a variety of steps at your disposal. The most obvious step is to move that foodutils code into its own file (thus making it a Python module), and to then import that module wherever else you want in the codebase.

Most of the time, doing that is enough. The Python module importing system is powerful, yet simple and elegant.

But… what happens a few months down the track, when you're working on two new codebases (let's call them TortelliniFest and GnocchiFest – perhaps they're for new clients too), that could also benefit from re-using foodutils from your old project? What happens when you make some changes to foodutils, for the new projects, but those changes would break compatibility with the old LasagnaFest codebase?

What happens when you want to give a super-charged boost to your open source karma, by contributing foodutils to the public domain, but separated from the cruft that ties it to LasagnaFest and Co? And what do you do with secretfoodutils, which for licensing reasons (it contains super-yummy but super-secret sauce) can't be made public, but which should ideally also be separated from the LasagnaFest codebase for easier re-use?

Some bits of Python need to be locked up securely as private dependencies.
Some bits of Python need to be locked up securely as private dependencies.
Image source: Hoedspruit Endangered Species Centre.

Or – not to be forgotten – what happens when, on one abysmally rainy day, you take a step back and audit the LasagnaFest codebase, and realise that it's got no less than 38 different *utils chunks of code strewn around the place, and you ponder whether surely keeping all those utils within the LasagnaFest codebase is really the best way forward?

Moving foodutils to its own module file was a great first step; but it's clear that in this case, a more drastic measure is needed. In this case, it's time to split off foodutils into a separate, independent codebase, and to make it an external dependency of the LasagnaFest project, rather than an internal component of it.

This article is an introduction to the how and the why of cutting up parts of a Python codebase into dependencies. I've just explained a fair bit of the why. As for the how: in a nutshell, pip (for installing dependencies), the public PyPI repo (for hosting open-sourced dependencies), and a private PyPI repo (for hosting proprietary dependencies). Read on for more details.

Levels of modularity

One of the (many) joys of coding in Python is the way that it encourages modularity. For example, let's start with this snippet of completely non-modular code:

foodgreeter.py:

dude_name = 'Johnny'
food_today = 'lasagna'
print("Hey {dude_name}! Want a {food_today} today?".format(
    dude_name=dude_name,
    food_today=food_today))

There are, in my opinion, three different levels of re-factoring that you can apply, in order to make it more modular. You can think of these levels like the layers of a lasagna, if you want. Or not.

Each successive level of re-factoring involves a bit more work in the short-term, but results in more convenient re-use in the long-term. So, which level is appropriate, depends on the likelihood that you (or others) will want to re-use a given chunk of code in the future.

First, you can split the logic out of the procedural blurg, and into a function in the same file:

foodgreeter.py:

def greet_dude_with_food(dude_name, food_today):
    return "Hey {dude_name}! Want a {food_today} today?".format(
        dude_name=dude_name,
        food_today=food_today)

dude_name = 'Johnny'
food_today = 'lasagna'
print(greet_dude_with_food(
    dude_name=dude_name,
    food_today=food_today))

Second, you can move that functionality into a separate file, and import it using Python's module imports system:

foodutils.py:

def greet_dude_with_food(dude_name, food_today):
    return "Hey {dude_name}! Want a {food_today} today?".format(
        dude_name=dude_name,
        food_today=food_today)

foodgreeter.py:

from foodutils import greet_dude_with_food

dude_name = 'Johnny'
food_today = 'lasagna'
print(greet_dude_with_food(
    dude_name=dude_name,
    food_today=food_today))

And, finally, you can move that file out of your codebase, upload it to a Python package repository (the most common such repository being PyPI), and then declare it as a dependency of your codebase using pip:

requirements.txt:

foodutils==1.0.0

Run command:

pip install -r requirements.txt

foodgreeter.py:

from foodutils import greet_dude_with_food

dude_name = 'Johnny'
food_today = 'lasagna'
print(greet_dude_with_food(
    dude_name=dude_name,
    food_today=food_today))

How to keep your building blocks organised.
How to keep your building blocks organised.
Image source: Organize and Decorate Everything.

As I said, achieving this last level of modularity isn't always necessary or appropriate, due to the overhead involved. For a given chunk of code, there are always going to be trade-offs to consider, and as a developer it's always going to be your judgement call.

Splitting out code

For the times when it is appropriate to go that "last mile" and split code out as an external dependency, there are (in my opinion) insufficient resources regarding how to go about it. I hope, therefore, that this section serves as a decent guide on the matter.

Factor out coupling

The first step in making until-now "project code" an external dependency, is removing any coupling that the chunk of code may have to the rest of the codebase. For example, the foodutils code shown above is nice and de-coupled; but what if it instead looked like so:

foodutils.py:

from mysettings import NUM_QUESTION_MARKS

def greet_dude_with_food(dude_name, food_today):
    return "Hey {dude_name}! Want a {food_today} today{q_marks}".format(
        dude_name=dude_name,
        food_today=food_today,
        q_marks='?'*NUM_QUESTION_MARKS)

This would be problematic, because this code relies on the assumption that it lives in a codebase containing a mysettings module, and that the configuration value NUM_QUESTION_MARKS is defined within that module.

We can remove this coupling by changing NUM_QUESTION_MARKS to be a parameter passed to greet_dude_with_food, like so:

foodutils.py:

def greet_dude_with_food(dude_name, food_today, num_question_marks):
    return "Hey {dude_name}! Want a {food_today} today{q_marks}".format(
        dude_name=dude_name,
        food_today=food_today,
        q_marks='?'*num_question_marks)

The dependent code in this project could then pass in the required config value when it calls greet_dude_with_food, like so:

foodgreeter.py:

from foodutils import greet_dude_with_food
from mysettings import NUM_QUESTION_MARKS

dude_name = 'Johnny'
food_today = 'lasagna'
print(greet_dude_with_food(
    dude_name=dude_name,
    food_today=food_today,
    num_question_marks=NUM_QUESTION_MARKS))

Once the code we're re-factoring no longer depends on anything elsewhere in the codebase, it's ready to be made an external dependency.

New repo for dependency

Next comes the step of physically moving the given chunk of code out of the project's codebase. In most cases, this means deleting the given file(s) from the project's version control repository (you are using version control, right?), and creating a new repo for those file(s) to live in.

For example, if you're using Git, the steps would be something like this:

mkdir /path/to/foodutils
cd /path/to/foodutils
git init .

mv /path/to/lasagnafest/project/foodutils.py .
git add .
git commit -m "Initial commit"

cd /path/to/lasagnafest
git rm project/foodutils.py
git commit -m "Moved foodutils to external dependency"

Add some metadata

The given chunk of code now has its own dedicated repo. But it's not yet a project, in its own right, and it can't yet be referenced as a dependency. To do that, we'll need to add some more files to the new repo, mainly consisting of metadata describing "who" this project is, and what it does.

First up, add a .gitignore file – I recommend the default Python .gitignore on GitHub. Feel free to customise as needed.
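
For illustration, here are a few of the typical entries you'd expect to see in a Python .gitignore (just a small sample, not the full GitHub template):

.gitignore:

*.pyc
__pycache__/
build/
dist/
*.egg-info/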

Next, add a version number to the code. The best way to do this, is to add it at the top of the main Python file, e.g. by adding this to the top of foodutils.py:

__version__ = '0.1.0'

After that, we're going to add the standard metadata files that almost all open-source Python projects have. Most importantly, a setup.py file that looks something like this:

import os

import setuptools

module_path = os.path.join(os.path.dirname(__file__), 'foodutils.py')

# Read the version string out of foodutils.py itself, so that the
# version number only ever needs to be defined in one place.
version_line = [line for line in open(module_path)
                if line.startswith('__version__')][0]

# Strip off the "__version__ = " prefix, the surrounding quotes, and
# the trailing newline, leaving just the version number.
__version__ = version_line.split('__version__ = ')[-1][1:][:-2]

setuptools.setup(
    name="foodutils",
    version=__version__,
    url="https://github.com/misterfoo/foodutils",

    author="Mister foo",
    author_email="mister@foo.com",

    description="Utils for handling food.",
    long_description=open('README.rst').read(),

    py_modules=['foodutils'],
    zip_safe=False,
    platforms='any',

    install_requires=[],

    classifiers=[
        'Development Status :: 2 - Pre-Alpha',
        'Environment :: Web Environment',
        'Intended Audience :: Developers',
        'Operating System :: OS Independent',
        'Programming Language :: Python',
        'Programming Language :: Python :: 2',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.3',
    ],
)

And also, a README.rst file:

foodutils
=========

Utils for handling food.

Once you've created those files, commit them to the new repo.

Push the repo

Great – the chunk of code now lives in its own repo, and it contains enough metadata for other projects to see what its name is, what version(s) of it there are, and what function(s) it performs. All that needs to be done now, is to decide where this repo will be hosted. But to do this, you first need to answer an important non-technical question: to open-source the code, or to keep it proprietary?

In general, you should open-source your dependencies whenever possible. You get more eyeballs (for free). Famous hairy people like Richard Stallman will send you flowers. If nothing else, you'll at least be able to always easily find your code, guaranteed (if you can't remember where it is, just Google it!). You get the drift. If open-sourcing the code, then the most obvious choice for where to host the repo is GitHub. (However, I'm not evangelising GitHub here, remember there are other options, kids).

Open source is kool, but sometimes you can't or you don't want to go down that route. That's fine, too – I'm not here to judge anyone, and I can't possibly be aware of anyone else's business / ownership / philosophical situation. So, if you want to keep the code all to your little self (or all to your little / big company's self), you're still going to have to host it somewhere. And no, "on my laptop" does not count as your code being hosted somewhere (well, technically you could just keep the repo on your own PC, and still reference it as a dependency, but that's a Bad Idea™). There are a number of hosting options: for example, on a VPS that you control; or using a managed service such as GitHub private, Bitbucket, or Assembla (note: once again, not promoting any specific service provider, just listing the main players as options).

So, once you've decided whether or not to open-source the code, and once you've settled on a hosting option, push the new repo to its hosted location.
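
For example, if you've gone with GitHub, pushing would look something like this (using the hypothetical misterfoo/foodutils repo from the setup.py example above – substitute your own hosting location as appropriate):

cd /path/to/foodutils
git remote add origin git@github.com:misterfoo/foodutils.git
git push -u origin master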

Upload to PyPI

Nearly there now. The chunk of code has been de-coupled from its dependent project; it's been put in a new repo with the necessary metadata; and that repo is now hosted at a permanent location somewhere online. All that's left, is to make it known to the universe of Python projects, so that it can be easily listed as a dependency of other Python projects.

If you've developed with Python before (and if you've read this far, then I assume you have), then no doubt you've heard of pip. Being the Python package manager of choice these days, pip is the tool used to manage Python dependencies. pip can find dependencies from a variety of locations, but the place it looks first and foremost (by default) is on the Python Package Index (PyPI).

If your dependency is public and open-source, then you should add it to PyPI. Each time you release a new version, then (along with committing and tagging that new version in the repo) you should also upload it to PyPI. I won't go into the details in this article; please refer to the official docs for registering and uploading packages on PyPI. When following the instructions there, you'll generally want to package your code as a "universal wheel", you'll generally use the PyPI website form to register a new package, and you'll generally use twine to upload the package.
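
As a rough sketch of that release workflow (the official docs are the authority here, and the details change over time), the files and commands involved look something like this – note that building a wheel requires the wheel package to be installed:

setup.cfg:

[bdist_wheel]
universal = 1

Run commands:

python setup.py sdist bdist_wheel
twine upload dist/*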

If your dependency is private and proprietary, then PyPI is not an option. The easiest way to deal with private dependencies (also the easiest way to deal with public dependencies, for that matter), is to not worry about proper Python packaging at all, and simply to use pip's ability to directly reference a source repo (including a specific commit / tag), e.g:

pip install -e \
git+http://git.myserver.com/foodutils.git@0.1.0#egg=foodutils

However, that has a number of disadvantages, the most visible disadvantage being that pip install will run much slower, because it has to do a git pull every time you ask it to check that foodutils is installed (even if you specify the same commit / tag each time).

A better way to deal with private dependencies, is to create your own "private PyPI". Same as with public packages: each time you release a new version, then (along with committing and tagging that new version in the repo) you should also upload it to your private PyPI. For instructions regarding this, please refer to my guide for how to set up and use a private PyPI repo. Also, note that my guide is for quite a minimal setup, although it contains links to some alternative setup options, including more advanced and full-featured options. (And if using a private PyPI, then take note of my guide's instructions for what to put in your local ~/.pip/pip.conf file).
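
As a minimal illustration of that last point (the exact settings depend on how your private PyPI is set up, so treat this as a sketch rather than a recipe), a ~/.pip/pip.conf that adds a private index alongside the public PyPI might look like this, using the hypothetical host pypi.myserver.com:

~/.pip/pip.conf:

[global]
extra-index-url = https://pypi.myserver.com/simple/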

Reference the dependency

The chunk of code is now ready to be used as an external dependency, by any project. To do this, you simply list the package in your project's requirements.txt file; whether the package is on the public PyPI, or on a private PyPI of your own, the syntax is the same:

foodutils==0.1.0 # From pypi.myserver.com

Then, just run your dependencies through pip as usual:

pip install -r requirements.txt

And there you have it: foodutils is now an external dependency. You can list it as a requirement for LasagnaFest, TortelliniFest, GnocchiFest, and as many other projects as you need.

Final thoughts

This article was born out of a series of projects that I've been working on over the past few months (and that I'm still working on), written mainly in Flask (these apps are still in alpha; ergo, sorry, can't talk about their details yet). The size of the projects' codebases grew to be rather unwieldy, and the projects have quite a lot of shared functionality.

I started out by re-using chunks of code between the different projects, with the hacky solution of sym-linking from one codebase to another. This quickly became unmanageable. Once I could stand the symlinks no longer (and once I had some time for clean-up), I moved these shared chunks of code into separate repos, and referenced them as dependencies (with some being open-sourced and put on the public PyPI). Only in the last week or so, after losing patience with slow pip installs, and after getting sick of seeing far too many -e git+http://git… strings in my requirements.txt files, did I finally get around to setting up a private PyPI, for better dealing with the proprietary dependencies of these codebases.

I hope that this article provides some clear guidance regarding what can be quite a confusing set of tasks: packaging up a chunk of Python code, hosting it (publicly or privately), and referencing it as a dependency. Aside from being a technical guide, though, my aim in penning this piece is to explain how you can split off component parts of a monolithic codebase into re-usable, independent separate codebases; and to convey the advantages of doing so, in terms of code quality and maintainability.

Flask, my framework of choice these days, strives to consist of a series of independent projects (Flask, Werkzeug, Jinja, WTForms, and the myriad Flask-* add-ons), which are compatible with each other, but which are also useful stand-alone or with other systems. I think that this is a great example for everyone to follow, even humble "custom web-app" developers like myself. Bearing that in mind, devoting some time to splitting code out of a big bad client-project codebase, and creating more atomic packages (even if not open-source) upon whose shoulders a client-project can stand, is a worthwhile endeavour.

]]>
Generating a Postgres DB dump of a filtered relational set https://greenash.net.au/thoughts/2015/06/generating-a-postgres-db-dump-of-a-filtered-relational-set/ Mon, 22 Jun 2015 00:00:00 +0000 https://greenash.net.au/thoughts/2015/06/generating-a-postgres-db-dump-of-a-filtered-relational-set/ PostgreSQL is my favourite RDBMS, and it's the fave of many others too. And rightly so: it's a good database! Nevertheless, nobody's perfect.

When it comes to exporting Postgres data (as SQL INSERT statements, at least), the tool of choice is the standard pg_dump utility. Good ol' pg_dump is rock solid but, unfortunately, it doesn't allow for any row-level filtering. Turns out that, for a recent project of mine, a filtered SQL dump is exactly what the client ordered.
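
To illustrate the gap: pg_dump can restrict a dump to particular tables, but not to particular rows within those tables. For example, something like the following (using the sample database described below) dumps every row of the world and country tables, with no way of saying "only the rows belonging to world ID 2":

pg_dump --data-only --column-inserts \
    --table=world --table=country \
    pg_dump_test > unfiltered_dump.sql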

On account of this shortcoming, I spent some time whipping up a lil' Python script to take care of this functionality. I've converted the original code (written for a client-specific data set) to a more generic example script, which I've put up on GitHub under the name "PG Dump Filtered". If you're just after the code, then feel free to head over to the repo without further ado. If you'd like to stick around for the tour, then read on.

Worlds apart

For the example script, I've set up a simple schema of four entities: worlds, countries, cities, and people. This schema happens to be purely hierarchical (i.e. each world has zero or more countries, each country has zero or more cities, and each city has zero or more people), for the sake of simplicity; but the script could be adapted to any valid set of foreign-key based relationships.

CREATE TABLE world (
    name character varying(255) NOT NULL,
    created_at timestamp without time zone,
    updated_at timestamp without time zone,
    active boolean NOT NULL,
    uuid bytea,
    id integer NOT NULL
);

ALTER TABLE ONLY world
    ADD CONSTRAINT world_pkey PRIMARY KEY (id);

CREATE TABLE country (
    name character varying(255) NOT NULL,
    created_at timestamp without time zone,
    updated_at timestamp without time zone,
    active boolean NOT NULL,
    uuid bytea,
    id integer NOT NULL,
    world_id integer,
    bigness numeric(10,2)
);

ALTER TABLE ONLY country
    ADD CONSTRAINT country_pkey PRIMARY KEY (id);
ALTER TABLE ONLY country
    ADD CONSTRAINT country_world_id_fkey FOREIGN KEY (world_id)
    REFERENCES world(id);

CREATE TABLE city (
    name character varying(255) NOT NULL,
    created_at timestamp without time zone,
    updated_at timestamp without time zone,
    active boolean NOT NULL,
    uuid bytea,
    id integer NOT NULL,
    country_id integer,
    weight integer,
    is_big boolean DEFAULT false NOT NULL,
    pseudonym character varying(255) DEFAULT ''::character varying
        NOT NULL,
    description text DEFAULT ''::text NOT NULL
);

ALTER TABLE ONLY city
    ADD CONSTRAINT city_pkey PRIMARY KEY (id);
ALTER TABLE ONLY city
    ADD CONSTRAINT city_country_id_fkey FOREIGN KEY (country_id)
    REFERENCES country(id);

CREATE TABLE person (
    name character varying(255) NOT NULL,
    created_at timestamp without time zone,
    updated_at timestamp without time zone,
    active boolean NOT NULL,
    uuid bytea,
    id integer NOT NULL,
    city_id integer,
    person_type character varying(255) NOT NULL
);

ALTER TABLE ONLY person
    ADD CONSTRAINT person_pkey PRIMARY KEY (id);
ALTER TABLE ONLY person
    ADD CONSTRAINT person_city_id_fkey FOREIGN KEY (city_id)
    REFERENCES city(id);

Using this schema, data belonging to two different worlds can co-exist in the same database. For example, we can have data for the world "Krypton" co-exist with data for the world "Romulus":

INSERT INTO world (name, created_at, updated_at, active, uuid, id)
VALUES ('Krypton', '2015-06-01 09:00:00.000000',
        '2015-06-06 09:00:00.000000', true,
        '\x478a43577ebe4b07ba8631ca228ee42a', 1);
INSERT INTO world (name, created_at, updated_at, active, uuid, id)
VALUES ('Romulus', '2015-06-01 10:00:00.000000',
        '2015-06-05 13:00:00.000000', true,
        '\x82e2c0ac3ba84a34a1ad3bbbb2063547', 2);

INSERT INTO country (name, created_at, updated_at, active, uuid, id,
                     world_id, bigness)
VALUES ('Crystalland', '2015-06-02 09:00:00.000000',
        '2015-06-08 09:00:00.000000', true,
        '\xcd0338cf2e3b40c3a3751b556a237152', 1, 1, 3.86);
INSERT INTO country (name, created_at, updated_at, active, uuid, id,
                     world_id, bigness)
VALUES ('Greenbloodland', '2015-06-03 11:00:00.000000',
        '2015-06-07 13:00:00.000000', true,
        '\x17591321d1634bcf986d0966a539c970', 2, 2, NULL);

INSERT INTO city (name, created_at, updated_at, active, uuid, id,
                  country_id, weight, is_big, pseudonym, description)
VALUES ('Kryptonopolis', '2015-06-05 09:00:00.000000',
        '2015-06-11 09:00:00.000000', true,
        '\x13659f9301d24ea4ae9c534d70285edc', 1, 1, 100, true,
        'Pointyville',
        'Nice place, once you get used to the pointiness.');

INSERT INTO city (name, created_at, updated_at, active, uuid, id,
                  country_id, weight, is_big, pseudonym, description)
VALUES ('Rom City', '2015-06-04 09:00:00.000000',
        '2015-06-13 09:00:00.000000', true,
        '\xc45a9fb0a92a43df91791b11d65f5096', 2, 2, 200, false,
        '',
        'Gakkkhhhh!');

INSERT INTO person (name, created_at, updated_at, active, uuid, id,
                    city_id, person_type)
VALUES ('Superman', '2015-06-14 09:00:00.000000',
        '2015-06-15 22:00:00.000000', true,
        '\xbadd1ca153994deca0f78a5158215cf6', 1, 1,
        'Awesome Heroic Champ');
INSERT INTO person (name, created_at, updated_at, active, uuid, id,
                    city_id, person_type)
VALUES ('General Zod', '2015-06-14 10:00:00.000000',
        '2015-06-15 23:00:00.000000', true,
        '\x796031428b0a46c2a9391eb5dc45008a', 2, 1,
        'Bad Bloke');

INSERT INTO person (name, created_at, updated_at, active, uuid, id,
                    city_id, person_type)
VALUES ('Mister Funnyears', '2015-06-14 11:00:00.000000',
        '2015-06-15 22:30:00.000000', false,
        '\x22380f6dc82d47f488a58153215864cb', 3, 2,
        'Mediocre Dude');
INSERT INTO person (name, created_at, updated_at, active, uuid, id,
                    city_id, person_type)
VALUES ('Captain Greeny', '2015-06-15 05:00:00.000000',
        '2015-06-16 08:30:00.000000', true,
        '\x485e31758528425dbabc598caaf86fa4', 4, 2,
        'Weirdo');

In this case, our two key stakeholders – the Kryptonians and the Romulans – have been good enough to agree to their respective data records being stored in the same physical database. After all, they're both storing the same type of data, and they accept the benefits of a shared schema in terms of cost-effectiveness, maintainability, and scalability.

However, these two stakeholders aren't exactly the best of friends. In fact, they're not even on speaking terms (have you even seen them both feature in the same franchise, let alone the same movie?). Plus, for legal reasons (and in the interests of intergalactic peace), there can be no possibility of Kryptonian records falling into Romulan hands, or vice versa. So, it really is critical that, as far as these two groups are concerned, the data appears to be completely partitioned.

(It's also lucky that we're using Postgres and Python, which all parties appear to be cool with – the Klingons are mad about Node.js and MongoDB these days, so the Romulans would never have come on board if we'd gone down that path…).

Fortunately, thanks to the wondrous script that's now been written, these unlikely DB room-mates can have their dilithium and eat it, too. The Romulans, for example, can simply specify their World ID of 2:

./pg_dump_filtered.py \
"postgresql://pg_dump_test:pg_dump_test@localhost:5432/pg_dump_test" 2 \
> ~/pg_dump_test_output.sql

And they'll get a DB dump of what is (as far as they're concerned) … well, the whole world! (Note: please do not change your dietary habits per above innuendo, dilithium can harm your unborn baby).

And all thanks to a lil' bit of Python / SQL trickery, to filter things according to their world:

# ...

# Thanks to:
# http://bytes.com/topic/python/answers/438133-find-out-schema-psycopg
t_cur.execute((
    "SELECT        column_name "
    "FROM          information_schema.columns "
    "WHERE         table_name = '%s' "
    "ORDER BY      ordinal_position") % table)

t_fields_str = ', '.join([x[0] for x in t_cur])
d_cur = conn.cursor()

# Start constructing the query to grab the data for dumping.
query = (
    "SELECT        x.* "
    "FROM          %s x ") % table

# The rest of the query depends on which table we're at.
if table == 'world':
    query += "WHERE         x.id = %(world_id)s "
elif table == 'country':
    query += "WHERE         x.world_id = %(world_id)s "
elif table == 'city':
    query += (
        "INNER JOIN    country c "
        "ON            x.country_id = c.id ")
    query += "WHERE         c.world_id = %(world_id)s "
elif table == 'person':
    query += (
        "INNER JOIN    city ci "
        "ON            x.city_id = ci.id "
        "INNER JOIN    country c "
        "ON            ci.country_id = c.id ")
    query += "WHERE         c.world_id = %(world_id)s "

# For all tables, filter by the top-level ID.
d_cur.execute(query, {'world_id': world_id})

With a bit more trickery thrown in for good measure, to more-or-less emulate pg_dump's export of values for different data types:

# ...

# Start constructing the INSERT statement to dump.
d_str = "INSERT INTO %s (%s) VALUES (" % (table, t_fields_str)
d_vals = []

for i, d_field in enumerate(d_row):
    d_type = type(d_field).__name__

    # Rest of the INSERT statement depends on the type of
    # each field.
    if d_type == 'datetime':
        d_vals.append("'%s'" % d_field.isoformat().replace('T', ' '))
    elif d_type == 'bool':
        d_vals.append('%s' % (d_field and 'true' or 'false'))
    elif d_type == 'buffer':
        d_vals.append(r"'\x" + ("%s'" % hexlify(d_field)))
    elif d_type == 'int':
        d_vals.append('%d' % d_field)
    elif d_type == 'Decimal':
        d_vals.append('%f' % d_field)
    elif d_type in ('str', 'unicode'):
        d_vals.append("'%s'" % d_field.replace("'", "''"))
    elif d_type == 'NoneType':
        d_vals.append('NULL')

d_str += ', '.join(d_vals)
d_str += ');'

(Above code samples from: pg_dump_filtered.py).

And that's the easy part done! Now, on to working out how to efficiently do Postgres master-slave replication over a distance of several thousand light years, without disrupting the space-time continuum.

(livelong AND prosper);

Hope my little example script comes in handy, for anyone else needing a version of pg_dump that can do arbitrary filtering on inter-related tables. As I said in the README, with only a small amount of tweaking, this script should be able to produce a dump of virtually any relational data set, filtered by virtually any criteria that you might fancy.

Also, this script is specifically for Postgres, because the pg_dump utility lacks any query-level filtering functionality, so using pg_dump for this task is simply not an option. The script could quite easily be adapted to other DBMSes (e.g. MySQL, SQL Server, Oracle), although most of Postgres's competitors have a dump utility with at least some filtering capability.
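
For instance, MySQL's mysqldump has a --where option, which can filter rows within the tables being dumped (although it can't follow foreign keys across tables, the way this script does). A hedged example, assuming an equivalent MySQL database named world_db:

mysqldump --where="world_id = 2" world_db country > country_only.sql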

]]>
Storing Flask uploaded images and files on Amazon S3 https://greenash.net.au/thoughts/2015/04/storing-flask-uploaded-images-and-files-on-amazon-s3/ Mon, 20 Apr 2015 00:00:00 +0000 https://greenash.net.au/thoughts/2015/04/storing-flask-uploaded-images-and-files-on-amazon-s3/ Flask is still a relative newcomer in the world of Python frameworks (it recently celebrated its fifth birthday); and because of this, it's still sometimes trailing behind its rivals in terms of plugins to scratch a given itch. I recently discovered that this was the case, with storing and retrieving user-uploaded files on Amazon S3.

For static files (i.e. an app's seldom-changing CSS, JS, and images), Flask-Assets and Flask-S3 work together like a charm. For more dynamic files, there exist numerous snippets of solutions, but I couldn't find anything to fill in all the gaps and tie it together nicely.

Due to a pressing itch in one of my projects, I decided to rectify this situation somewhat. Over the past few weeks, I've whipped up a bunch of Python / Flask tidbits, to handle the features that I needed:

s3-saver, for saving and deleting user-uploaded files, either locally or on S3;
url-for-s3, for generating URLs to files stored on S3;
flask-thumbnails-s3, for generating image thumbnails, stored either locally or on S3; and
flask-admin-s3-upload, for handling file and image uploads within Flask-Admin, saved either locally or on S3.

I've also published an example app, that demonstrates how all these tools can be used together. Feel free to dive straight into the example code on GitHub; or read on for a step-by-step guide of how this Flask S3 tool suite works.

Using s3-saver

The key feature across most of this tool suite, is being able to use the same code for working with local and with S3-based files. Just change a single config option, or a single function argument, to switch from one to the other. This is critical to the way I need to work with files in my Flask projects: on my development environment, everything should be on the local filesystem; but on other environments (especially production), everything should be on S3. Others may have the same business requirements (in which case you're in luck). This is most evident with s3-saver.

Here's a sample of the typical code you might use, when working with s3-saver:

from io import BytesIO
from os import path

from flask import current_app as app
from flask import Blueprint
from flask import flash
from flask import redirect
from flask import render_template
from flask import url_for
from s3_saver import S3Saver

from project import db
from library.prefix_file_utcnow import prefix_file_utcnow
from foo.forms import ThingySaveForm
from foo.models import Thingy


mod = Blueprint('foo', __name__)


@mod.route('/', methods=['GET', 'POST'])
def home():
    """Displays the Flask S3 Save Example home page."""

    model = Thingy.query.first() or Thingy()

    form = ThingySaveForm(obj=model)

    if form.validate_on_submit():
        image_orig = model.image
        image_storage_type_orig = model.image_storage_type
        image_bucket_name_orig = model.image_storage_bucket_name

        # Initialise s3-saver.
        image_saver = S3Saver(
            storage_type=app.config['USE_S3'] and 's3' or None,
            bucket_name=app.config['S3_BUCKET_NAME'],
            access_key_id=app.config['AWS_ACCESS_KEY_ID'],
            access_key_secret=app.config['AWS_SECRET_ACCESS_KEY'],
            field_name='image',
            storage_type_field='image_storage_type',
            bucket_name_field='image_storage_bucket_name',
            base_path=app.config['UPLOADS_FOLDER'],
            static_root_parent=path.abspath(
                path.join(app.config['PROJECT_ROOT'], '..')))

        form.populate_obj(model)

        if form.image.data:
            filename = prefix_file_utcnow(model, form.image.data)

            filepath = path.abspath(
                path.join(
                    path.join(
                        app.config['UPLOADS_FOLDER'],
                        app.config['THINGY_IMAGE_RELATIVE_PATH']),
                    filename))

            # Best to pass in a BytesIO to S3Saver, containing the
            # contents of the file to save. A file from any source
            # (e.g. in a Flask form submission, a
            # werkzeug.datastructures.FileStorage object; or if
            # reading in a local file in a shell script, perhaps a
            # Python file object) can be easily converted to BytesIO.
            # This way, S3Saver isn't coupled to a Werkzeug POST
            # request or to anything else. It just wants the file.
            temp_file = BytesIO()
            form.image.data.save(temp_file)

            # Save the file. Depending on how S3Saver was initialised,
            # could get saved to local filesystem or to S3.
            image_saver.save(
                temp_file,
                app.config['THINGY_IMAGE_RELATIVE_PATH'] + filename,
                model)

            # If updating an existing image,
            # delete old original and thumbnails.
            if image_orig:
                if image_orig != model.image:
                    filepath = path.join(
                        app.config['UPLOADS_FOLDER'],
                        image_orig)

                    image_saver.delete(filepath,
                        storage_type=image_storage_type_orig,
                        bucket_name=image_bucket_name_orig)

                glob_filepath_split = path.splitext(path.join(
                    app.config['MEDIA_THUMBNAIL_FOLDER'],
                    image_orig))
                glob_filepath = glob_filepath_split[0]
                glob_matches = image_saver.find_by_path(
                    glob_filepath,
                    storage_type=image_storage_type_orig,
                    bucket_name=image_bucket_name_orig)

                for filepath in glob_matches:
                    image_saver.delete(
                        filepath,
                        storage_type=image_storage_type_orig,
                        bucket_name=image_bucket_name_orig)
        else:
            model.image = image_orig

        # Handle image deletion
        if form.image_delete.data and image_orig:
            filepath = path.join(
                app.config['UPLOADS_FOLDER'], image_orig)

            # Delete the file. In this case, we have to pass in
            # arguments specifying whether to delete locally or on
            # S3, as this should depend on where the file was
            # originally saved, rather than on how S3Saver was
            # initialised.
            image_saver.delete(filepath,
                storage_type=image_storage_type_orig,
                bucket_name=image_bucket_name_orig)

            # Also delete thumbnails
            glob_filepath_split = path.splitext(path.join(
                app.config['MEDIA_THUMBNAIL_FOLDER'],
                image_orig))
            glob_filepath = glob_filepath_split[0]

            # S3Saver can search for files too. When searching locally,
            # it uses glob(); when searching on S3, it uses key
            # prefixes.
            glob_matches = image_saver.find_by_path(
                glob_filepath,
                storage_type=image_storage_type_orig,
                bucket_name=image_bucket_name_orig)

            for filepath in glob_matches:
                image_saver.delete(filepath,
                                   storage_type=image_storage_type_orig,
                                   bucket_name=image_bucket_name_orig)

            model.image = ''
            model.image_storage_type = ''
            model.image_storage_bucket_name = ''

        if form.image.data or form.image_delete.data:
            db.session.add(model)
            db.session.commit()
            flash('Thingy %s' % (
                      form.image_delete.data and 'deleted' or 'saved'),
                  'success')
        else:
            flash(
                'Please upload a new thingy or delete the ' +
                    'existing thingy',
                'warning')

        return redirect(url_for('foo.home'))

    return render_template('home.html',
                           form=form,
                           model=model)

(From: https://github.com/Jaza/flask-s3-save-example/blob/master/project/foo/views.py).

As is hopefully evident in the sample code above, the idea with s3-saver is that as little S3-specific code as possible is needed, when performing operations on a file. Just find, save, and delete files as usual, per the user's input, without worrying about the details of that file's storage back-end.

s3-saver uses the excellent Python boto library, as well as Python's built-in file handling functions, so that you don't have to. As you can see in the sample code, you don't need to directly import either boto, or the file-handling functions such as glob or os.remove. All you need to import is io.BytesIO, and os.path, in order to be able to pass s3-saver the parameters that it needs.

Using url-for-s3

This is a simple utility function, that generates a URL to a given S3-based file. It's designed to match flask.url_for as closely as possible, so that one can be swapped out for the other with minimal fuss.

from __future__ import print_function

from flask import url_for
from url_for_s3 import url_for_s3

from project import db


class Thingy(db.Model):
    """Sample model for flask-s3-save-example."""

    id = db.Column(db.Integer(), primary_key=True)
    image = db.Column(db.String(255), default='')
    image_storage_type = db.Column(db.String(255), default='')
    image_storage_bucket_name = db.Column(db.String(255), default='')

    def __repr__(self):
        return 'A thingy'

    @property
    def image_url(self):
        from flask import current_app as app
        return (self.image
            and '%s%s' % (
                app.config['UPLOADS_RELATIVE_PATH'],
                self.image)
            or None)

    @property
    def image_url_storageaware(self):
        if not self.image:
            return None

        if not (
                self.image_storage_type
                and self.image_storage_bucket_name):
            return url_for(
                'static',
                filename=self.image_url,
                _external=True)

        if self.image_storage_type != 's3':
            raise ValueError((
                'Storage type "%s" is invalid, the only supported ' +
                'storage type (apart from default local storage) ' +
                'is s3.') % self.image_storage_type)

        return url_for_s3(
            'static',
            bucket_name=self.image_storage_bucket_name,
            filename=self.image_url)

(From: https://github.com/Jaza/flask-s3-save-example/blob/master/project/foo/models.py).

The above sample code illustrates how I typically use url_for_s3. For a given instance of a model, if that model's file is stored locally, then generate its URL using flask.url_for; otherwise, switch to url_for_s3. Only one extra parameter is needed: the S3 bucket name.

  {% if model.image %}
  <p><a href="{{ model.image_url_storageaware }}">View original</a></p>
  {% endif %}

(From: https://github.com/Jaza/flask-s3-save-example/blob/master/templates/home.html).

I can then easily show the "storage-aware URL" for this model in my front-end templates.

Using flask-thumbnails-s3

In my use case, the majority of the files being uploaded are images, and most of those images need to be resized when displayed in the front-end. Also, ideally, the dimensions for resizing shouldn't have to be pre-specified (i.e. thumbnails shouldn't only be able to get generated when the original image is first uploaded); new thumbnails of any size should get generated on-demand per the templates' needs. The front-end may change according to the design / branding whims of clients and other stakeholders, further on down the road.

flask-thumbnails handles just this workflow for local files; so, I decided to fork it and to create flask-thumbnails-s3, which works the same as flask-thumbnails when set to use local files, but which can also store and retrieve thumbnails on a S3 bucket.

    {% if image %}
    <div>
    <img src="{{ image|thumbnail(size,
                                 crop=crop,
                                 quality=quality,
                                 storage_type=storage_type,
                                 bucket_name=bucket_name) }}"
        alt="{{ alt }}" title="{{ title }}" />
    </div>
    {% endif %}

(From: https://github.com/Jaza/flask-s3-save-example/blob/master/templates/macros/imagethumb.html).

Like its parent project, flask-thumbnails-s3 is most commonly invoked by way of a template filter. If a thumbnail of the given original file exists, with the specified size and attributes, then it's returned straightaway; if not, then the original file is retrieved, a thumbnail is generated, and the thumbnail is saved to the specified storage back-end.

At the moment, flask-thumbnails-s3 blocks the running thread while it generates a thumbnail and saves it to S3. Ideally, this task would get sent to a queue, and a "dummy" thumbnail would be returned in the immediate request, until the "real" thumbnail is ready in a later request. The Sorlery plugin for Django uses the queued approach. It would be cool if flask-thumbnails-s3 (optionally) did the same. Anyway, it works without this fanciness for now; extra contributions welcome!

(By the way, in my testing, this is much less of a problem if your Flask app is deployed on an Amazon EC2 box, particularly if it's in the same region as your S3 bucket; unsurprisingly, there appears to be much less latency between an EC2 server and S3, than there is between a non-Amazon server and S3).

Using flask-admin-s3-upload

The purpose of flask-admin-s3-upload is basically to provide the same 'save' functionality as s3-saver, but automatically within Flask-Admin. It does this by providing alternatives to the flask_admin.form.upload.FileUploadField and flask_admin.form.upload.ImageUploadField classes, namely flask_admin_s3_upload.S3FileUploadField and flask_admin_s3_upload.S3ImageUploadField.

(Anecdote: I actually wrote flask-admin-s3-upload before any of the other tools in this suite, because I began by working with a part of my project that has no custom front-end, only a Flask-Admin based management console).

Using the utilities provided by flask-admin-s3-upload is fairly simple:

from os import path

from flask_admin_s3_upload import S3ImageUploadField

from project import admin, app, db
from foo.models import Thingy
from library.admin_utils import ProtectedModelView
from library.prefix_file_utcnow import prefix_file_utcnow


class ThingyView(ProtectedModelView):
    column_list = ('image',)
    form_excluded_columns = ('image_storage_type',
                             'image_storage_bucket_name')

    form_overrides = dict(
        image=S3ImageUploadField)

    form_args = dict(
        image=dict(
            base_path=app.config['UPLOADS_FOLDER'],
            relative_path=app.config['THINGY_IMAGE_RELATIVE_PATH'],
            url_relative_path=app.config['UPLOADS_RELATIVE_PATH'],
            namegen=prefix_file_utcnow,
            storage_type_field='image_storage_type',
            bucket_name_field='image_storage_bucket_name',
        ))

    def scaffold_form(self):
        form_class = super(ThingyView, self).scaffold_form()
        static_root_parent = path.abspath(
            path.join(app.config['PROJECT_ROOT'], '..'))

        if app.config['USE_S3']:
            form_class.image.kwargs['storage_type'] = 's3'

        form_class.image.kwargs['bucket_name'] = \
            app.config['S3_BUCKET_NAME']
        form_class.image.kwargs['access_key_id'] = \
            app.config['AWS_ACCESS_KEY_ID']
        form_class.image.kwargs['access_key_secret'] = \
            app.config['AWS_SECRET_ACCESS_KEY']
        form_class.image.kwargs['static_root_parent'] = \
            static_root_parent

        return form_class


admin.add_view(ThingyView(Thingy, db.session, name='Thingies'))

(From: https://github.com/Jaza/flask-s3-save-example/blob/master/project/foo/admin.py).

Note that flask-admin-s3-upload only handles saving, not deleting (the same as the regular Flask-Admin file / image upload fields only handle saving). If you wanted to handle deleting files in the admin as well, you could (for example) use s3-saver, and hook it in to one of the Flask-Admin event callbacks.
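
As a rough sketch of that idea (this isn't part of flask-admin-s3-upload; it's just one possible way of wiring it up, assuming the same Thingy model and config keys as in the examples above, and with S3Saver imported in admin.py), you could override on_model_delete on the ThingyView shown earlier:

    def on_model_delete(self, model):
        """Deletes the model's file from local / S3 storage before
        the model itself is deleted."""
        if not model.image:
            return

        # Initialise s3-saver the same way as in the front-end view.
        image_saver = S3Saver(
            storage_type=app.config['USE_S3'] and 's3' or None,
            bucket_name=app.config['S3_BUCKET_NAME'],
            access_key_id=app.config['AWS_ACCESS_KEY_ID'],
            access_key_secret=app.config['AWS_SECRET_ACCESS_KEY'],
            field_name='image',
            storage_type_field='image_storage_type',
            bucket_name_field='image_storage_bucket_name',
            base_path=app.config['UPLOADS_FOLDER'],
            static_root_parent=path.abspath(
                path.join(app.config['PROJECT_ROOT'], '..')))

        filepath = path.join(app.config['UPLOADS_FOLDER'], model.image)

        # Delete from wherever this particular file was originally
        # saved (locally or on S3), rather than according to how
        # S3Saver was initialised.
        image_saver.delete(filepath,
            storage_type=model.image_storage_type,
            bucket_name=model.image_storage_bucket_name)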

In summary

I'd also like to mention one thing that others have implemented in Flask: direct JavaScript-based upload to S3. Implementing this sort of functionality in my tool suite would be a great next step; however, it would have to play nice with everything else I've built (particularly with flask-thumbnails-s3), and it would have to work for local- and for S3-based files, the same as all the other tools do. I don't have time to address those hurdles right now – another area where contributions are welcome.

I hope that this article serves as a comprehensive guide, of how to use the Flask S3 tools that I've recently built and contributed to the community. Any questions or concerns, please drop me a line.

]]>
Five long-distance and long-way-off Australian infrastructure links https://greenash.net.au/thoughts/2015/01/five-long-distance-and-long-way-off-australian-infrastructure-links/ Sat, 03 Jan 2015 00:00:00 +0000 https://greenash.net.au/thoughts/2015/01/five-long-distance-and-long-way-off-australian-infrastructure-links/ Australia. It's a big place. With only a handful of heavily populated areas. And a whole lot of nothing in between.

No, really. Nothing.
No, really. Nothing.
Image source: Australian Outback Buffalo Safaris.

Over the past century or so, much has been achieved in combating the famous Tyranny of Distance that naturally afflicts this land. High-quality road, rail, and air links now traverse the length and breadth of Oz, making journeys between most of her far-flung corners relatively easy.

Nevertheless, there remain a few key missing pieces, in the grand puzzle of a modern, well-connected Australian infrastructure system. This article presents five such missing pieces, that I personally would like to see built in my lifetime. Some of these are already in their early stages of development, while others are pure fantasies that may not even be possible with today's technology and engineering. All of them, however, would provide a new long-distance connection between regions of Australia, where there is presently only an inferior connection in place, or none at all.

Tunnel to Tasmania

Let me begin with the most nut-brained idea of all: a tunnel from Victoria to Tasmania!

As the sole major region of Australia that's not on the continental landmass, currently the only options for reaching Tasmania are by sea or by air. The idea of a tunnel (or bridge) to Tasmania is not new, it has been sporadically postulated for over a century (although never all that seriously). There's a long and colourful discussion of routes, cost estimates, and geographical hurdles at the Bass Strait Tunnel thread on Railpage. There's even a Facebook page promoting a Tassie Tunnel.

Artist's impression of a possible Tassie Tunnel (note: may not actually be artistic).
Artist's impression of a possible Tassie Tunnel (note: may not actually be artistic).
Image sources: Wikimedia Commons: Light on door at the end of tunnel; Wikimedia Commons: Tunnel icon2; satellite imagery courtesy of Google Earth.

Although it would be a highly beneficial piece of infrastructure, that would in the long-term (among other things) provide a welcome boost to Tasmania's (and Australia's) economy, sadly the Tassie Tunnel is probably never going to happen. The world's longest undersea tunnel to date (under the Tsugaru Strait in Japan) spans only 54km. A tunnel under the Bass Strait, directly from Victoria to Tasmania, would be at least 200km long; although if it went via King Island (to the northwest of Tas), it could be done as two tunnels, each one just under 100km. Both the length and the depth of such a tunnel make it beyond the limits of contemporary engineering.

Aside from the engineering hurdle – and of course the monumental cost – it also turns out that the Bass Strait is Australia's main seismic hotspot (just our luck, what with the rest of Australia being seismically dead as a doornail). The area hasn't seen any significant undersea volcanic activity in the past few centuries, but experts warn that it could start letting off steam in the near future. This makes it hardly an ideal area for building a colossal tunnel.

Railway from Mt Isa to Tennant Creek

Great strides have been made in connecting almost all the major population centres of Australia by rail. The first significant long-distance rail link in Oz was the line from Sydney to Melbourne, which was completed in 1883 (although a change-of-gauge was required until 1962). The Indian Pacific (Sydney to Perth), a spectacular trans-continental achievement and the nation's longest train line – not to mention one of the great railways of the world – is the real backbone on the map, and has been operational since 1970. The newest and most long-awaited addition, The Ghan (Adelaide to Darwin), opened for business in 2004.

The Ghan roaring through the outback.
The Ghan roaring through the outback.
Image source: Fly With Me.

Today's nation-wide rail network (with regular passenger service) is, therefore, at an impressive all-time high. Every state and territory capital is connected (except for Hobart – a Tassie Tunnel would fix that!), and numerous regional centres are in the mix too. Despite the fact that many of the lines / trains are old and clunky, they continue (often stubbornly) to plod along.

If you look at the map, however, you might notice one particularly glaring gap in the network, particularly now that The Ghan has been established. And that is between Mt Isa in Queensland (the terminus of The Inlander service from Townsville), and Tennant Creek in the Northern Territory (which The Ghan passes through). At the moment, travelling continuously by rail from Townsville to Darwin would involve a colossal horse-shoe journey via Sydney and Adelaide, which only an utter nutter would consider embarking upon. Whereas with the addition of this relatively small (1,000km or so) extra line, the journey would be much shorter, and perfectly feasible. Although still long; there's no silver bullet through the outback.

A railway from Mt Isa to Tennant Creek – even though it would traverse some of the most remote and desolate land in Australia – is not a pipe dream. It's been suggested several times over the past few years. As with the development of the Townsville to Mt Isa railway a century ago, it will need the investment of the mining industry in order to actually happen. Unfortunately, the current economic situation means that mining companies are unlikely to invest in such a project at this time; what's more, The Inlander is a seriously decrepit service (at risk of being decommissioned) on an ageing line, making it somewhat unsuitable for joining up with a more modern line to the west.

Nonetheless, I have high hopes that we will see this railway connection built in the not-too-distant future, when the stars are next aligned.

Highway to Cape York

Australia's northernmost region, the Cape York Peninsula, is also one of the country's last truly wild frontiers. There is now a sealed all-weather highway all the way around the Australian mainland, and there's good or average road access to the key towns in almost all regional areas. Cape York is the only place left in Oz that lacks such roads, and that's also home to a non-trivial population (albeit a small 20,000-ish people, the majority Aborigines, in an area half the size of Victoria). Other areas in Oz with no road access whatsoever, such as south-west Tasmania, and most of the east of Western Australia, are lacking even a trivial population.

The biggest challenge to reliable transport in the Cape is the wet season: between December and April, there's so much rainfall that all the rivers become flooded, making roads everywhere impassable. Aside from that, the Cape also presents other obstacles, such as being seriously infested with crocodiles.

There are two main roads that provide access to the Cape: the Peninsula Developmental Road (PDR) from Lakeland to Weipa, and the Northern Peninsula Road (NPR), from the junction north of Coen on to Bamaga. The PDR is slowly improving, but the majority of it is still unsealed and is closed for much of the wet season. The NPR is worse: little (if any) of the route is sealed, and a ferry is required to cross the Jardine River (approaching the road's northern terminus), even at the height of the dry season.

The main road up Cape York Peninsula.
The main road up Cape York Peninsula.
Image source: Eco Citizen Australia.

A proper Cape York Highway, all the way from Lakeland to The Tip, is in my opinion bound to get built eventually. I've seen mention of a prediction that we should expect it done by 2050; if that estimate can be met, I'd call it a great achievement. To bring the Cape's main roads up to highway standard, they'd need to be sealed all the way, and there would need to be reasonably high bridges over all the rivers. Considering the very extreme weather patterns up that way, the route will never be completely flood-proof (much as the fully-sealed Barkly Highway through the Gulf of Carpentaria, south of the Cape, isn't flood-proof either); but if a journey all the way to The Tip were possible in a 2WD vehicle for most of the year, that would be a grand accomplishment.

High-speed rail on the Eastern seaboard

Of all the proposals being put forward here, this is by far the most well-known and the most oft talked about. Many Australians are in agreement with me, on the fact that a high-speed rail link along the east coast is sorely needed. Sydney to Canberra is generally touted as an appropriate first step, Sydney to Melbourne is acknowledged as the key component, and Sydney to Brisbane is seen as a very important extension.

There's no shortage of commentary out there regarding this idea, so I'll refrain from going into too much detail here. In particular, the topic has been flooded with conversation since the fairly recent (2013) government-funded feasibility study (to the tune of AUD$20 million) into the matter.

Sadly, despite all the good news – the glowing recommendations of the government study; the enthusiasm of countless Australians; and some valiant attempts to stave off inertia – Australia has been waiting for high-speed rail an awfully long time, and it's probably going to have to keep on waiting. Because, with the cost of a complete Brisbane-Sydney-Canberra-Melbourne network estimated at around AUD$100 billion, neither the government nor anyone else is in a hurry to cough up the requisite cash.

This is the only proposal in this article about an infrastructure link that would complement another one (of the same mode) that already exists. I've tried to focus on links that are needed where currently there is nothing at all. However, I feel that this proposal belongs here, because despite its proud and important history, the ageing eastern seaboard rail network is rapidly becoming an embarrassment to the nation.

This Greens flyer says it better than I can.
This Greens flyer says it better than I can.
Image source: Adam Bandt MP.

The corner of Australia where 90% of the population live, deserves (and needs) a train service for the future, not one that belongs in a museum. The east coast interstate trains still run on diesel, as the lines aren't even electrified outside of the greater metropolitan areas. The network's few (remaining) passenger services share the line with numerous freight trains. There are still a plethora of old-fashioned level crossings. And the majority of the route is still single-track, causing regular delays and seriously limiting the line's capacity. And all this on two of the world's busiest air routes, with the road routes also struggling under the load.

Come on, Aussie – let's join the 21st century!

Self-sustaining desert towns

Some may consider my final idea a little kookoo, but I truly believe that it would be of benefit to our great sunburnt country. As should be clear by now, immense swathes of Australia are empty desert. There are many dusty roads and 4WD tracks traversing the country's arid centre, and it's not uncommon for some of the towns along these routes to be 1,000km or more distant from their nearest neighbour. This results in communities (many of them indigenous) that are dangerously isolated from each other and from critical services; it makes for treacherous vehicle journeys, where travellers must bring extra necessities such as petrol and water, just to last the distance; and it means that Australia as a whole suffers from more physical disconnects, robbing contiguity from our otherwise unified land.

Many outback communities and stations are a long way from their nearest neighbours.
Many outback communities and stations are a long way from their nearest neighbours.
Image source: news.com.au.

Good transport networks (road and rail) across the country are one thing, but they're not enough. In my opinion, what we need to do is to string out more desert towns along our outback routes, in order to reduce the distances of no human contact, and of no basic services.

But how to support such towns, when most outback communities are struggling to survive as it is? And how to attract more people to these towns, when nobody wants to live out in the bush? In my opinion, with the help of modern technology and of alternative agricultural methods, it could be made to work.

Towns need a number of resources in order to thrive. First and foremost, they need water. Securing sufficient water in the outback is a challenge, but with comprehensive conservation rules, and modern water reuse systems, having at least enough water for a small population's residential use becomes feasible, even in the driest areas of Australia. They also need electricity, in order to use modern tools and appliances. Fortunately, making outback towns energy self-sufficient is easier than it's ever been before, thanks to recent breakthroughs in solar technology. A number of these new technologies have even been pilot-tested in the outback.

In order to be self-sustaining, towns also need to be able to cultivate their own food in the surrounding area. This is a challenge in most outback areas, where water is scarce and soil conditions are poor. Many remote communities rely on food and other basic necessities being trucked in. However, a number of recent initiatives related to desert greening may help to solve this thorny (as an outback spinifex) problem.

Most promising is the global movement (largely founded and based in Australia) known as permaculture. A permaculture-based approach to desert greening has enjoyed a vivid and well-publicised success on several occasions; most notably, Geoff Lawton's project in the Dead Sea Valley of Jordan about ten years ago. There has been some debate regarding the potential ability of permaculture projects to green the desert in Australia. Personally, I think that the pilot projects to date have been very promising, and that similar projects in Australia would be, at the least, a most worthwhile endeavour. There are also various other projects in Australia that aim to create or nurture green corridors in arid areas.

There are also crazy futuristic plans for metropolis-size desert habitats, although these fail to explain in detail how such habitats could become self-sustaining. And there are some interesting projects in place around the world already, focused on building self-sustaining communities.

As for where to build a new corridor of desert towns, my preference would be to target an area as remote and as spread-out as possible. For example, along the Great Central Road (which is part of the "Outback Highway"). This might be an overly-ambitious route, but it would certainly be one of the most suitable.

And regarding the "tough nut" of how to attract people to come and live in new outback towns – when it's hard enough already just to maintain the precarious existing population levels – I have no easy answer. It has been suggested that, with the growing number of telecommuters in modern industries (such as IT), and with other factors such as the high real estate prices in major cities, people will become increasingly likely to move to the bush, assuming there's adequately good-quality internet access in the respective towns. Personally, as an IT professional who has worked remotely on many occasions, I don't find this to be a convincing enough argument.

I don't think that there's any silver bullet to incentivising a move to new desert towns. "Candy dangling" approaches such as giving away free houses in the towns, equipping buildings with modern sustainable technologies, or even giving cash gifts to early pioneers – these may be effective in getting a critical mass of people out there, but they're unlikely to be sufficient to keep them there in the long term. Really, such towns would have to develop a local economy and a healthy local business ecosystem in order to maintain their residents; and that would be a struggle for newly-built towns, the same as it's been a struggle for existing outback towns since day one.

In summary

Love 'em or hate 'em, admire 'em or attack 'em, there's my list of five infrastructure projects that I think would be of benefit to Australia. Some are more likely to happen than others; unfortunately, it appears that none of them is going to be fully realised any time soon. Feedback welcome!

]]>
Conditionally adding HTTP response headers in Flask and Apache https://greenash.net.au/thoughts/2014/12/conditionally-adding-http-response-headers-in-flask-and-apache/ Mon, 29 Dec 2014 00:00:00 +0000 https://greenash.net.au/thoughts/2014/12/conditionally-adding-http-response-headers-in-flask-and-apache/ For a Flask-based project that I'm currently working on, I just added some front-end functionality that depends on Font Awesome. Getting Font Awesome to load properly (in well-behaved modern browsers) shouldn't be much of a chore. However, my app spans multiple subdomains (achieved with the help of Flask's Blueprints per-subdomain feature), and my static assets (CSS, JS, etc) are only served from one of those subdomains. And as it turns out (and unlike cross-domain CSS / JS / image requests), cross-domain font requests are forbidden unless the font files are served with an appropriate Access-Control-Allow-Origin HTTP response header. For example, this is the error message that's shown in Google Chrome for such a request:

Font from origin 'http://foo.local' has been blocked from loading by Cross-Origin Resource Sharing policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://bar.foo.local' is therefore not allowed access.

As a result of this, I had to quickly learn how to conditionally add custom HTTP response headers based on the URL being requested, both for Flask (when running locally with Flask's built-in development server), and for Apache (when running in staging and production). In a typical production Flask setup, it's impossible to do anything at the Python level when serving static files, because these are served directly by the web server (e.g. Apache, Nginx), without ever hitting WSGI. Conversely, in a typical development setup, there is no web server running separately to the WSGI app, and so playing around with static files must be done at the Python level.

The code

For a regular Flask request that's handled by one of the app's custom routes, adding another header to the HTTP response would be a simple matter of modifying the flask.Response object before returning it. However, static files (in a development setup) are served by Flask's built-in app.send_static_file() function, not by any route that you have control over. So, instead, it's necessary to intercept the response object via Flask's API.
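
To make that first point concrete – the easy case of a route that you do control – here's a quick illustrative sketch (the route is made up; it's not from my actual app):

from flask import Flask, make_response

app = Flask(__name__)

@app.route('/font-info')
def font_info():
    # Build the response object explicitly, so that its headers can be
    # tweaked before returning it.
    response = make_response('Font files live on the static subdomain')
    response.headers['Access-Control-Allow-Origin'] = '*'
    return response

Static files, however, never pass through a route like this, hence the interception described next.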

Fortunately, this interception is easily accomplished, courtesy of Flask's app.after_request() function, which can either be passed a callback function, or used as a decorator. Here's what did the trick for me:

import re

from flask import Flask
from flask import request


app = Flask(__name__)

def add_headers_to_fontawesome_static_files(response):
    """
    Fix for font-awesome files: after Flask static send_file() does its
    thing, but before the response is sent, add an
    Access-Control-Allow-Origin: *
    HTTP header to the response (otherwise browsers complain).
    """

    if (request.path and
        re.search(r'\.(ttf|woff|svg|eot)$', request.path)):
        response.headers.add('Access-Control-Allow-Origin', '*')

    return response

if app.debug:
    app.after_request(add_headers_to_fontawesome_static_files)
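
(And if you prefer the decorator style that I mentioned above, an equivalent sketch – not my actual code – looks like this; since the decorator registers the hook unconditionally, the app.debug check simply moves inside the function:)

@app.after_request
def add_headers_to_fontawesome_static_files(response):
    # Same logic as above, but registered via the decorator form of
    # after_request(); the app.debug check now lives inside the hook.
    if (app.debug and request.path and
            re.search(r'\.(ttf|woff|svg|eot)$', request.path)):
        response.headers.add('Access-Control-Allow-Origin', '*')
    return response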

For a production setup, the above Python code achieves nothing, and it's therefore necessary to add something like this to the config file for the app's VirtualHost:

<VirtualHost *:80>
  # ...

  Alias /static /path/to/myapp/static
  <Location /static>
    Order deny,allow
    Allow from all
    Satisfy Any

    SetEnvIf Request_URI "\.(ttf|woff|svg|eot)$" is_font_file
    Header set Access-Control-Allow-Origin "*" env=is_font_file
  </Location>
</VirtualHost>

Done

And there you go: an easy way to add custom HTTP headers to any response, in two different web server environments, based on a conditional request path. So far, cleanly serving cross-domain font files is all that I've needed this for. But it's a very handy little snippet, and no doubt there are plenty of other scenarios in which it could save the day.

]]>
Forgotten realms of the Oxus region https://greenash.net.au/thoughts/2014/10/forgotten-realms-of-the-oxus-region/ Sat, 11 Oct 2014 00:00:00 +0000 https://greenash.net.au/thoughts/2014/10/forgotten-realms-of-the-oxus-region/ In classical antiquity, a number of advanced civilisations flourished in the area that today comprises parts of Turkmenistan, Uzbekistan, Tajikistan, and Afghanistan. Through this area runs a river most commonly known by its Persian name, as the Amu Darya. However, in antiquity it was known by its Greek name, as the Oxus (and in the interests of avoiding anachronism, I will be referring to it as the Oxus in this article).

The Oxus region is home to archaeological relics of grand civilisations, most notably of ancient Bactria, but also of Chorasmia, Sogdiana, Margiana, and Hyrcania. However, most of these ruined sites enjoy far less fame, and are far less well-studied, than comparable relics in other parts of the world.

I recently watched an excellent documentary series called Alexander's Lost World, which investigates the history of the Oxus region in-depth, focusing particularly on the areas that Alexander the Great conquered as part of his legendary military campaign. I was blown away by the gorgeous scenery, the vibrant cultural legacy, and the once-majestic ruins that the series featured. But, more than anything, I was surprised and dismayed at the extent to which most of the ruins have been neglected by the modern world – largely due to the region's turbulent history of late.

Ayaz Kala (fortress 2) of Khwarezm (Chorasmia), today desert but in ancient times green and lush.
Ayaz Kala (fortress 2) of Khwarezm (Chorasmia), today desert but in ancient times green and lush.
Image source: Stantastic: Back to Uzbekistan (Khiva).

This article has essentially the same aim as that of the documentary: to shed more light on the ancient cities and fortresses along the Oxus and nearby rivers; to get an impression of the cultures that thrived there in a bygone era; and to explore the climate change and the other forces that have dramatically affected the region between then and now.

Getting to know the rivers

First and foremost, an overview of the major rivers in question. Understanding the ebbs and flows of these arteries is critical, as they are the lifeblood of a mostly arid and unforgiving region.

Overview map of the Oxus region, showing major rivers and antiquity-era archaeological sites.
Overview map of the Oxus region, showing major rivers and antiquity-era archaeological sites.
Map: Forgotten realms of the Oxus region (Google Maps Engine). Satellite imagery courtesy of Google Earth.

The Oxus is the largest river (by water volume) in Central Asia. Due to various geographical factors, it's also changed its course more times (and more dramatically) than any other river in the region, and perhaps in the world.

The source of the Oxus is the Wakhan river, which begins at Baza'i Gonbad at the eastern end of Afghanistan's remote Wakhan Corridor, often nicknamed "The Roof of the World". This location is only 40km from the tiny and seldom-crossed Sino-Afghan border. Although the Wakhan river valley has never been properly "civilised" – neither by ancient empires nor by modern states (its population is as rugged and nomadic today as it was millennia ago) – it has been populated continuously since ancient times.

Next in line downstream is the Panj river, which begins where the Wakhan and Pamir rivers meet. For virtually its entire length, the Panj follows the Afghanistan-Tajikistan border; and it winds a zig-zag course through rugged terrain for much of its length, until it leaves behind the modern-day Badakhstan province towards its end. Like the Wakhan, the mountainous upstream part of the Panj was never truly conquered; however, the more accessible downstream part was the eastern frontier of ancient Bactria.

Kaakha fortress, overlooking the Panj river.
Kaakha fortress, overlooking the Panj river.
Image source: Photos Khandud (Hava Afghanistan).

The Oxus proper begins where the Panj and Vakhsh rivers meet, on the Afghanistan-Tajikistan border. It continues along the Afghanistan-Uzbekistan border, and then along the Afghanistan-Turkmenistan border, until it enters Turkmenistan where the modern-day Karakum Canal begins. The river's course has been fairly stable along this part of the route throughout recorded history, although it has made many minor deviations, especially further downstream where the land becomes flatter. This part of the river was the Bactrian heartland in antiquity.

The rest of the river's route – mainly through Turkmenistan, but also hugging the Uzbek border in sections, and finally petering out in Uzbekistan – traverses the flat, arid Karakum Desert. The course and strength of the river down here has changed constantly over the centuries; for this reason, the Oxus has earned the nickname "the mad river". In ancient times, the Uzboy river branched off from the Oxus in northern Turkmenistan, and ran west across the desert until (arguably) emptying into the Caspian Sea. However, the strength of the Uzboy gradually lessened, until the river completely perished approximately 400 years ago. It appears that the Uzboy was considered part of the Oxus (and was given no name of its own) by many ancient geographers.

The Oxus proper breaks up into an extensive delta once in Uzbekistan; and for most of recorded history, it emptied into the Aral Sea. However, due to an aggressive Soviet-initiated irrigation campaign since the 1950s (the culmination of centuries of Russian planning and dreaming), the Oxus delta has receded significantly, and the river's waters fizzle out in the desert before reaching the sea. This is one of the major causes of the death of the Aral Sea, one of the worst environmental disasters on the planet.

Although geographically separate, the nearby Murghab river is also an important part of the cultural and archaeological story of the Oxus region (its lower reaches in modern-day Turkmenistan, at least). From its confluence with the Kushk river, the Murghab meanders gently down through semi-arid lands, before opening into a large delta that fans out optimistically into the unforgiving Karakum desert. The Murghab delta was in ancient times the heartland of Margiana, an advanced civilisation whose heyday largely predates that of Bactria, and which is home to some of the most impressive (and under-appreciated) archaeological sites in the region.

I won't be covering it in this article, as it's a whole other landscape and a whole lot more history; nevertheless, I would be remiss if I failed to mention the "sister" of the Oxus, the Syr Darya river, which was in antiquity known by its Greek name as the Jaxartes, and which is the other major artery of Central Asia. The source of the Jaxartes is (according to some) not far from that of the Oxus, high up in the Pamir mountains; from there, it runs mainly north-west through Kyrgyzstan, Uzbekistan, and then (for more than half its length) Kazakhstan, before approaching the Aral Sea from the east. Like the Oxus, the present-day Jaxartes also peters out before reaching the Aral; and since these two rivers were formerly the principal water sources of the Aral, that sea is now virtually extinct.

Hyrcania

Having finished tracing the rivers' paths from the mountains to the desert, I will now – much like the documentary – explore the region's ancient realms the other way round, beginning in the desert lowlands.

The heartland of Hyrcania (more often absorbed into a map of ancient Parthia, than given its own separate mention) – in ancient times just as in the present-day – is Golestan province, Iran, which is a fertile and productive area on the south-east shores of the Caspian. This part of Hyrcania is actually outside of the Oxus region, and so falls off this article's radar. However, Hyrcania extended north into modern-day Turkmenistan, reaching the banks of the then-flowing Uzboy river (which was ambiguously referred to by Greek historians as the "Ochos" river).

Settlements along the lower Uzboy part of Hyrcania (which was on occasion given a name of its own, Nesaia) were few. The most notable surviving ruin there is the Igdy Kala fortress, which dates to approximately the 4th century BCE, and which (arguably) exhibits both Parthian and Chorasmian influence. Very little is known about Igdy Kala, as the site has seldom been formally studied. The question of whether the full length of the Uzboy ever existed remains unresolved, particularly regarding the section from Sarykamysh Lake to Igdy Kala.

By including Hyrcania in the Oxus region, I'm tentatively siding with those that assert that the "greater Uzboy" did exist; if it didn't (i.e. if the Uzboy finished in Sarykamysh Lake, and if the "lower Uzboy" was just a small inlet of the Caspian Sea), then the extent of cultural interchange between Hyrcania and the actual Oxus realms would have been minimal. In the documentary, narrator David Adams is quite insistent that the Oxus was connected to the Caspian in antiquity, making frequent reference to the works of Patrocles; while this was quite convenient for the documentary's hypothesis that the Oxo-Caspian was a major trade route, the truth is somewhat less black-and-white.

Chorasmia

North-east of Hyrcania, crossing the lifeless Karakum desert, lies Chorasmia, better known for most of its history as Khwarezm. Chorasmia lies squarely within the Oxus delta; although the exact location of its capitals and strongholds has shifted considerably over the centuries, due to political upheavals and due to changes in the delta's course. In antiquity (particularly in the 4th and 3rd centuries BCE), Chorasmia was a vassal state of the Achaemenid Persian empire, much like the rest of the Oxus region; the heavily Persian-influenced language and culture of Chorasmia, which can still be faintly observed in modern times, harks back to this era.

This region was strongest in medieval times, and its medieval capital at the present-day ghost city of Konye-Urgench – known in its heyday as Gurganj – was Chorasmia's most significant seat of power. Gurganj was abandoned in the 16th century CE, when the Oxus changed course and left the once-fertile city and surrounds without water.

It's unknown exactly where antiquity-era Chorasmia was centred, although part of the ruins of Kyrk Molla at Gurganj date back to this period, as do part of the ruins of Itchan Kala in present-day Khiva (which was Khwarezm's capital from the 17th to the 20th centuries CE). Probably the most impressive and best-preserved ancient ruins in the region are those of the Ayaz Kala fortress complex, parts of which date back to the 4th century BCE. There are numerous other "Kala" (the Chorasmian word for "fortress") nearby, including Toprak Kala and Kz'il Kala.

One of the less-studied sites – but by no means a less significant site – is Dev-kesken Kala, a fortress lying due west of Konye-Urgench, on the edge of the Ustyurt Plateau, overlooking the dry channel of the former upper Uzboy river. Much like Konye-Urgench (and various other sites in the lower Oxus delta), Dev-kesken Kala was abandoned when the water stopped flowing, in around the 16th century CE. The city was formerly known as Vazir, and it was a thriving hub of medieval Khwarezm. Also like other sites, parts of the fortress date back to the 4th century BCE.

Dev-kesken Kala as photographed by Soviet Archaeologist Sergey Tolstov in 1947.
Dev-kesken Kala as photographed by Soviet Archaeologist Sergey Tolstov in 1947.
Image source: Karakalpak: Devkesken qala and Vazir.

I should also note that Dev-kesken Kala was one of the most difficult archaeological sites (of all the sites I'm describing in this article) to find information online for. I even had to create the Dev-Kesken Wikipedia article, which previously didn't exist (my first time creating a brand-new page there). The site was also difficult to locate on Google Earth (should now be easier, the co-ordinates are saved on the Wikipedia page). The site is certainly under-studied and under-visited, considering its distinctive landscape and its once-proud history; however, it is remote and difficult to access, and I understand that this Uzbek-Turkmen frontier area is also rather unsafe, due to an ongoing border dispute.

Margiana

South of Chorasmia – crossing the Karakum desert once again – one will find the realm that in antiquity was known as Margiana (although that name is simply the hellenised version of the original Persian name Margu). Much like Chorasmia, Margiana is centred on a river delta in an otherwise arid zone; in this case, the Murghab delta. And like the Oxus delta, the Murghab delta has also dried up and receded significantly over the centuries, due to both natural and human causes. Margiana lies within what is present-day southern Turkmenistan.

Although the Oxus doesn't run through Margiana, the realm is nevertheless part of the Oxus region (more surely so than Hyrcania, through which a former branch of the Oxus arguably runs), for a number of reasons. Firstly, it's geographically quite close to the Oxus, with only about 200km of flat desert separating the two. Secondly, the Murghab and the Oxus share many geographical traits, such as their arid deltas (as mentioned above), and also their habit of frequently and erratically changing course. Lastly, and most importantly, there is evidence of significant cultural interchange between Margiana and the other Oxus realms throughout civilised history.

The political centre of Margiana – in the antiquity period and for most of the medieval period, too – was the city of Merv, which was known then as Gyaur Kala (and also briefly by its hellenised name, Antiochia Margiana). Today, Merv is one of the largest and best-preserved archaeological sites in all the Oxus region, although most of the visible ruins are medieval, and the older ruins still lie largely buried underneath. The site has been populated since at least the 5th century BCE.

Although Merv was Margiana's capital during the Persian and Greek periods, the site of most significance around here is the far older Gonur Tepe. The site of Gonur was completely unknown to modern academics until the 1970s, when the legendary Soviet archaeologist Viktor Sarianidi discovered it (Sarianidi sadly passed away less than a year ago, aged 84). Gonur lies in what is today the parched desert, but what was – in Gonur's heyday, in approximately 2,000 BCE – well within the fertile expanse of the then-greater Murghab delta.

The vast bronze age complex of Gonur Tepe.
The vast bronze age complex of Gonur Tepe.
Image source: Boskawola.

Gonur was one of the featured sites in the documentary series – and justly so, because it's the key site of the so-called Bactria-Margiana Archaeological Complex. It's also a prime example of a "forgotten realm" in the region: to this day, few tourists and journalists have ever visited it (David Adams and his crew were among those that have made the arduous journey); and, apart from Sarianidi (who dedicated most of his life to studying Gonur and nearby ruins), few archaeologists have explored the site, and insufficient effort is being made by authorities and by academics to preserve the crumbling ruins. All this is a tragedy, considering that some have called for bronze-age Margiana to be added to the list of the classic "cradles of civilisation", which includes Egypt, Babylon, India, and China.

There are many other ruins in Margiana that were part of the bronze-age culture centred in Gonur. One of the other more prominent sites is Altyn Tepe, which lies about 200km south-west of Gonur, still within Turkmenistan but close to the Iranian border. Altyn Tepe, like Gonur, reached its zenith around 2,000 BCE; the site is characterised by a large Babylon-esque ziggurat. Altyn Tepe was also studied extensively by Sarianidi; and it too has been otherwise largely overlooked during modern times.

Sogdiana

Crossing the Karakum desert again (for the last time in this article) – heading north-east from Margiana – and crossing over to the northern side of the Oxus river, one may find the realm that in antiquity was known as Sogdiana (or Sogdia). Sogdiana principally occupies the area that is modern-day southern Uzbekistan and western Tajikistan.

The Sogdian heartland is the fertile valley of the Zeravshan river (old Persian name), which was once known by its Greek name, as the Polytimetus, and which has also been called the Sughd, in honour of its principal inhabitants (modern-day Tajikistan's Sughd province, through which the river runs, likewise honours them).

The Zeravshan's source is high in the mountains near the Tajik-Kyrgyz border, and for its entire length it runs west, passing through the key ancient Sogdian cities of Panjakent, Samarkand (once known as Maracanda), and Bukhara (which all remain vibrant cities to this day), before disappearing in the desert sands approaching the Uzbek-Turkmen border. The Zeravshan probably reached the Oxus and emptied into it – once upon a time – near modern-day Türkmenabat, which in antiquity was known as Amul, and in medieval times as Charjou. For most of its history, Amul lay just beyond the frontiers of Sogdiana, and it was the crossroads of all the principal realms of the Oxus region mentioned in this article.

Although Sogdiana is an integral part of the Oxus region, and although it was a dazzling civilisation in antiquity (indeed, it was arguably the most splendid of all the Oxus realms), I'm only mentioning it in this article for completeness, and I will refrain from exploring its archaeology in detail. (You may also have noted that the Zeravshan river and the Sogdian cities are missing from my custom map of the region). This is because I don't consider Sogdiana to be "forgotten", in anywhere near the sense that the other realms are "forgotten".

The key Sogdian sites – particularly Samarkand and Bukhara, which are both today UNESCO-listed – enjoy international fame; they have been studied intensively by modern academics; and they are the biggest tourist attractions in all of Central Asia. Apart from Sogdiana's prominence in Silk Road history, and its impressive and well-preserved architecture, the relative safety and stability of Uzbekistan – compared with its fellow Oxus-region neighbours Turkmenistan, Tajikistan, and Afghanistan – has resulted in the Sogdian heartland receiving the attention it deserves from the curious modern world.

Also – putting aside its "not forgotten" status – the partial exclusion (or, perhaps more accurately, the ambivalent inclusion) of Sogdiana from the Oxus region has deep historical roots. Going back to Achaemenid Persian times, Sogdiana was the extreme northern frontier of Darius's empire. And when the Greeks arrived and began to exert their influence, Sogdiana was known as Transoxiana, literally meaning "the land across the Oxus". Thus, from the point of view of the two great powers that dominated the region in antiquity – the Persians and the Greeks – Sogdiana was considered as the final outpost: a buffer between their known, civilised sphere of control; and the barbarous nomads who dwelt on the steppes beyond.

Bactria

Finally, after examining the other realms of the Oxus region, we come to the land that was the region's showpiece in the antiquity period: Bactria. The Bactrian heartland can be found south of Sogdiana, separated from it by the (relatively speaking) humble Chul'bair mountain range. Bactria occupies a prime position along the Oxus river: that is, it's the first section lying downstream of overly-rugged terrain; and it's upstream enough that it remains quite fertile to this day, although it's significantly less fertile than it was millennia ago. Bactria falls principally within modern-day northern Afghanistan; but it also encroaches into southern Uzbekistan and Tajikistan.

Historians know more about antiquity-era Bactria than they do about the rest of the Oxus region, primarily because Bactria was better incorporated into the great empires of that age than were its neighbours, and therefore far more written records of Bactria have survived. Bactria was a semi-autonomous satrapy (province) of the Persian empire since at least the 6th century BCE, although it was probably already under Persian influence well before then. It was conquered by Alexander the Great in 328 BCE (after he had already marched through Sogdiana the year before), thus marking the start of Greco-Bactrian rule, making Bactria the easternmost hellenistic outpost of the ancient world.

However, considering its place in the narrative of these empires, and considering its being recorded by both Persian and Greek historians, surprisingly little is known about the details of ancient Bactria today. This is why the documentary was called "Alexander's Lost World". Much like its neighbours, the area that was once Bactria is relatively seldom visited and seldom studied, due to its turbulent recent history.

The first archaeological site that I'd like to discuss in this section is that of Kampyr Tepe, which lies on the northern bank of the Oxus (putting it within Uzbek territory), just downstream from modern-day Termez. Kampyr Tepe was constructed around the 4th century BCE, possibly initially as a garrison by Alexander's forces. It was a thriving city for several centuries after that. It would have been an important defensive stronghold in antiquity, lying as it does near the western frontier of Bactria proper, not far from the capital, and affording excellent views of the surrounding territory.

Kampyr Tepe overlooking the fertile valley of the Bactrian Oxus.
Kampyr Tepe overlooking the fertile valley of the Bactrian Oxus.
Image source: rusanoff – Photo of Kampyrtepa.

There is evidence that a number of different religious groups co-existed peacefully in Kampyr Tepe: relics of Hellenism, Zoroastrianism, and Buddhism from similar time periods have been discovered here. The ruins themselves are in good condition, especially considering the violence and instability that has affected the site's immediate surroundings in recent history. However, the reason for the site's admirable state of preservation is also the reason for its inaccessibility: due to its border location, Kampyr Tepe is part of a sensitive Uzbek military-controlled zone, and access is highly restricted.

The capital of Bactria was the grand city of Bactra, the location of which is generally accepted to be a circular plateau of ruins touching the northern edge of the modern-day city of Balkh. These lie within the delta of the modern-day Balkh river (once known as the Bactrus river), about 70km south of where the Oxus presently flows. In antiquity, the Bactrus delta reached the Oxus and fed into it; but the modern-day Balkh delta (like so many other deltas mentioned in this article) fizzles out in the sand.

Today, the most striking feature of the ruins is the 10km-long ring of thick, high walls enclosing the ancient city. Balkh is believed to have been inhabited since at least the 27th century BCE, although most of the archaeological remains only date back to about the 4th century BCE. The ruins at Balkh are currently on UNESCO's tentative World Heritage list. It's likely that the plateau at Balkh was indeed ancient Bactra; however, this has never been conclusively proven. Modern archaeologists barely had any access to the site until 2003, due to decades of military conflict in the area. To this day, access continues to be highly restricted, for security reasons.

The formidable walls surrounding the ruins of Bactra, adjacent to modern-day Balkh.
The formidable walls surrounding the ruins of Bactra, adjacent to modern-day Balkh.
Image source: Hazara Association of UK.

Bactria was an important centre of Zoroastrianism, and Bactra is one of (and is the most likely of) several contenders claiming to be the home of the mythical prophet Zoroaster. Tentatively related to this is the fact that Bactra was also (possibly) once known as Zariaspa. A few historians have gone further, and have suggested that Bactra and Zariaspa were two different cities; if this is the case, then a whole new can of worms is opened, because it raises a multitude of further questions. Where was Zariaspa? Was Bactra at Balkh, and Zariaspa elsewhere? Or were Bactra and Zariaspa actually the same city… but located elsewhere?

Based on the theory of Ptolemy (and perhaps others), in the documentary David Adams strongly hypothesises that: (a) Bactra and Zariaspa were twin cities, next to each other; (b) the twin-city of Bactra-Zariaspa was located somewhere on the Oxus north of Balkh (he visits and proposes such a site, which I believe was somewhere between modern-day Termez and Aiwanj); and (c) this site, rather than Balkh, was the capital of the Greco-Bactrian kingdom that followed Alexander's conquest. While this is certainly an interesting hypothesis – and while it's true that there hasn't been nearly enough excavation or analysis done in modern times to rule it out – the evidence and the expert opinion, as it stands today, would suggest that Adams's hypothesis is wrong. As such, I think that his assertion of "the lost city of Bactra-Zariaspa" lying on the Oxus, rather than in the Bactrus delta, was stated rather over-confidently and with insufficient disclaimers in the documentary.

Upper Bactria

Although not historically or culturally distinct from the Bactrian heartland, I'm analysing "Upper Bactria" (i.e. the part of Bactria upstream of modern-day Balkh province) separately here, primarily to maintain structure in this article, but also because this farther-flung part of the realm is geographically quite rugged, in contrast to the heartland's sweeping plains.

First stop in Upper Bactria is the archaeological site of Takhti Sangin. This ancient ruin can be found on the Tajik side of the border; and since it's located at almost the exact spot where the Panj and Vakhsh rivers meet to become the Oxus, it could also be said that Takhti Sangin is the last site along the Oxus proper that I'm examining. However, much like the documentary, I'll be continuing the journey further upstream to the (contested) "source of the Oxus".

The ruins of Takhti Sangin.
The ruins of Takhti Sangin.
Image source: MATT: From North to South.

The principal structure at Takhti Sangin was a large Zoroastrian fire temple, which in its heyday boasted a pair of constantly-burning torches at its main entrance. Most of the remains at the site date back to the 3rd century BCE, when it became an important centre in the Greco-Bactrian kingdom (and when it was partially converted into a centre of Hellenistic worship); but the original temple is at least several centuries older than this, as attested to by various Achaemenid Persian-era relics.

Takhti Sangin is also the place where the famous "Oxus treasure" was discovered in the British colonial era (most of the treasure can be found on display at the British Museum to this day). In the current era, visitor access to Takhti Sangin is somewhat more relaxed than is access to the Bactrian sites further downstream (mentioned above) – there appear to be tour operators in Tajikistan running regularly-scheduled trips there – but this is also a sensitive border area, and as such, access is controlled by the Tajik military (who maintain a constant presence). Much like the sites of the Bactrian heartland, Takhti Sangin has been studied only sporadically by modern archaeologists, and much remains yet to be clarified regarding its history.

Moving further upstream, to the confluence of the Panj and Kokcha rivers, one reaches the site of Ai-Khanoum (meaning "Lady Moon" in modern Uzbek), which is believed (although not by all) to be the legendary city that was known in antiquity as Alexandria Oxiana. This was the most important Greco-Bactrian centre in Upper Bactria: it was built in the 3rd century BCE, and appears to have remained relatively vibrant for several centuries thereafter. It's also the site furthest upstream on the Oxus, for which there is significant evidence to indicate a Greco-Bactrian presence. It's a unique site within the Oxus region, in that it boasts the typical urban design of a classical Greek city; it's virtually "a little piece of Greece" in Central Asia. It even housed an amphitheatre and a gymnasium.

Ai-Khanoum certainly qualifies as a "lost" city: it was unknown to all save some local tribespeople, until the King of Afghanistan chanced upon it during a hunting trip in 1961. Due primarily to the subsequent Afghan-Soviet war, the site has been poorly studied (and also badly damaged) since then. In the documentary, it's explained (and illustrated with some impressive 3D animation) how – according to some – the Greco-Bactrian city was built atop the ruins of an older city, probably of Persian origin, which was itself once a dazzling metropolis. The documentary also indicates that access to Ai-Khanoum is currently tricky, and must be coordinated with the Afghan military; the site itself is also difficult to physically reach, as it's basically an island amongst the rivers that converge around it, depending on seasonal fluctuations.

The final site that I'd like to discuss regarding the realm of Bactria is that of Sar-i Sang (a name meaning "place of stone"). At this particularly remote spot in the mountains of Badakhstan, there is barely a town today, nor was there one in ancient times. The nearest settlement of any size is modern-day Fayzabad, the provincial capital. From Ai-Khanoum, the Kokcha river winds upstream and passes through Fayzabad; and from there, the Kokcha valley continues its treacherous path up into the mountains, with Sar-i Sang located about 100km south of Fayzabad.

Sar-i Sang is not a town, it's a mine: at an estimated 7,000 years of age, it's believed to be the oldest continuously-operating mine in the world. Throughout recorded history, people have come here seeking the beautiful precious stone known as lapis lazuli, which exists in veins of the hillsides here in the purest form and in the greatest quantity known on Earth.

Lapis lazuli for sale in Peshawar, Pakistan, all brought directly from Sar-i Sang.
Lapis lazuli for sale in Peshawar, Pakistan, all brought directly from Sar-i Sang.
Image source: Jewel Tunnel Imports: Lapis Lazuli.

Although Sar-i Sang (also known as Darreh-Zu) is quite distant from all the settlements of ancient Bactria (and quite distant from the Oxus), the evidence suggests that throughout antiquity the Bactrians worked the mines here, and that lapis lazuli played a significant role in Bactria's international trade. Sar-i Sang lapis lazuli can be found in numerous famous artifacts of other ancient empires, including the tomb of Tutankhamun in Egypt. Sources also suggest that this distinctive stone was Bactria's most famous export, and that it was universally associated with Bactria, much like silk was associated with China.

The Wakhan

Having now discussed the ancient realms of the Oxus from all the way downstream in the hot desert plains, there remains only one segment of this epic river left to explore: its source far upstream. East of Bactria lies one of the most inaccessible, solitary, and unspoiled places in the world: the Wakhan Corridor. Being the place where a number of very tall mountain ranges meet – among them the Pamirs, the Hindu Kush, and the Karakoram – the Wakhan has often been known as "The Roof of the World".

The Wakhan today is a long, slim "panhandle" of territory within Afghanistan, bordered to the north by Tajikistan, to the south by Pakistan, and to the east by China. This distinctive borderline was a colonial-era invention, a product of "The Great Game" played out between Imperial Russia and Britain, designed to create a buffer zone between these powers. Historically, however, the Wakhan has been nomadic territory, belonging to no state or empire, and with nothing but the immensity of the surrounding geography serving as its borders (as it continues to be on the ground to this day). The area is also miraculously bereft of the scourges of war and terrorism that have plagued the rest of Afghanistan in recent years.

Panorama of the spectacular Wakhan valley at the village of Sarhad.
Panorama of the spectacular Wakhan valley at the village of Sarhad.
Image source: Geoffrey Degens: Wakhan Valley at Sarhad.

Much like the documentary, my reasons for discussing the Wakhan are primarily geographic ones. The Wakhan is centred around a single long valley, whose river – today known as the Panj, and then higher up as the Wakhan river – is generally recognised as the source of the Oxus. It's important to acknowledge this high-altitude area, which plays such an integral role in feeding the river that diverse cultures further downstream depend upon, and which has fuelled the Oxus's long and colourful history.

There are few significant archaeological sites within the Wakhan. The ancient Kaakha fortress falls just outside the Wakhan proper, at the extreme eastern-most extent of the influence of antiquity-era kingdoms in the Oxus region. The only sizeable settlement in the Wakhan itself is the village of Sarhad, which has been continuously inhabited for millennia, and which is the base of the unique Wakhi people, who are the Wakhan's main tribe (Sarhad is also where the single rough road along the Wakhan valley ends). Just next to Sarhad lies the Kansir fort, built by the Tibetan empire in the 8th century CE, a relic of the battle that the Chinese and Tibetan armies fought in the Wakhan in the year 747 CE (this was probably the most action that the Wakhan has ever seen in its history).

Close to the very spot where the Wakhan river begins is Baza'i Gonbad (or Bozai Gumbaz, in Persian "domes of the elders"), a collection of small, ancient mud-brick domes about which little is known. As there's nothing else around for miles, they are occasionally used to this day as travellers' shelters. They are believed to be the oldest structures in the Wakhan, but it's unclear who built them (it was probably one of the nomadic Kyrgyz tribes that roam the area), or when.

The mysterious domed structures at Bozai Gumbaz.
The mysterious domed structures at Bozai Gumbaz.
Image source: David Adams Films: Bozai Gumbaz.

Regarding which famous people have visited the Wakhan throughout history: it appears almost certain that Marco Polo passed through the Wakhan in the 13th century CE, in order to reach China; and a handful of other Europeans visited the Wakhan in the subsequent centuries (and it's almost certain that the only "tourists" to ever visit the Wakhan are those of the past century or so). In the documentary, David Adams suggests repeatedly in the final episode (that in which he journeys to the Wakhan) that Alexander – either the man himself, or his legions – not only entered the Wakhan Corridor, but even crossed one of its high passes over to Pakistan. I've found no source to clearly corroborate this claim; and after posing the question to a forum of Alexander-philes, it appears quite certain that neither Alexander nor his legions ever set foot in the Wakhan.

Conclusion

So, there you have it: my humble overview of the history of a region ruled by rivers, empires, and treasures. As I've emphasised throughout this article, the Oxus region is most lamentably a neglected and under-investigated place, considering its colourful history and its rich tapestry of cultures and landscapes. My aim in writing this piece is simply to inform anyone else who may be interested, and to better preserve the region's proud legacy.

I must acknowledge and wholeheartedly thank David Adams and his team for producing the documentary Alexander's Lost World, which I have referred to throughout this article, and whose material I have re-analysed as the basis of my writings here. The series has been criticised by history buffs for its various inaccuracies and unfounded claims; and I admit that I too, in this article, have criticised it several times. However, despite this, I laud the series' team for producing a documentary that I enjoyed immensely, and that educated me and inspired me to research the Oxus region in-depth. Like the documentary, this article is about rivers and climate change as the primary forces of the region, and Alexander the Great (along with other famous historical figures) is little more than a sidenote to this theme.

I am by no means an expert on the region, nor have I ever travelled to it (I have only "vicariously" travelled there, by watching the documentary and by writing this article!). I would love to someday set my own two feet upon the well-trodden paths of the Oxus realms, and to see these crumbling testaments to long-lost g-ds and kings for myself. For now, however, armchair history blogging will have to suffice.

]]>
First experiences developing a single-page JS-driven web app https://greenash.net.au/thoughts/2014/08/first-experiences-developing-a-single-page-js-driven-web-app/ Tue, 26 Aug 2014 00:00:00 +0000 https://greenash.net.au/thoughts/2014/08/first-experiences-developing-a-single-page-js-driven-web-app/ For the past few months, my main dev project has been a custom tool that imports metric data from a variety of sources (via APIs), and that generates reports showing that data in numerous graphical and tabular formats. The app is private (and is still in alpha), so I'm afraid I can't go into more detail than that at this time.

I decided (and I was encouraged by stakeholders) to build the tool as a single-page application, i.e. as a web app where almost all of the front-end is powered by JavaScript, and where the page is redrawn via AJAX calls and client-side templates. This was my first experience developing such an app; as such, I'd like to reflect on the choices I made, and on my understanding of the technology as it stands now.

Drowning in frameworks

I never saw one before in my life, and I hope I never see one of those fuzzy miserable things again.
I never saw one before in my life, and I hope I never see one of those fuzzy miserable things again.
Image source: Memory Alpha (originally from Star Trek TOS Season 2 Ep 13).

Building single-page applications is all the rage these days; as such, a gazillion frameworks have popped up, all promising to take the pain out of the dev work for you. In reality, when your problem is that you need to create an app, and you think: "I know, I'll go and choose a JS framework", now you have two problems.

Actually, that's not the full story either. When you choose the wrong JS* framework – due to it being unsuitable for your project, and/or due to your failing to grok it – and you have to look for a framework a second time, and port the code you've already started writing… now you've got three problems!

(* I'd prefer to just refer to these frameworks as "JS", rather than use the much-bandied-about term "MVC", because not all such frameworks are MVC, and because one's project may be unsuitable for client-side MVC anyway).

Ah, the joy of first-time blunders.

I started by choosing Ember.js. It's one of the most popular frameworks at the moment. It does everything you could possibly need for your funky new JS app. Turns out that: (a) Ember was complete overkill for my relatively simple app; and (b) despite my best efforts, I failed to grok Ember, and I felt that my time would be better spent switching to something else and thereafter working more efficiently, than continuing to grapple with Ember's philosophy and complexity.

In the end, I settled on Sammy.js. This is one of the lesser-known frameworks out there. It boasts far fewer features than Ember.js (and even so, I haven't used all that Sammy.js offers either). It doesn't get in the way of my app's functionality. Many of its features are just a thin wrapper on top of jQuery, which I already know intimately. It adds a few bits 'n' pieces into my existing JS ecosystem, to give my app more structure and more interactivity, rather than nuking my existing ecosystem and making me feel like single-page JS is a whole new language.

My advice to others who are choosing a whiz-bang JS framework for the first time: don't necessarily go with the most popular or the most full-featured framework you find (although don't discard such options either); think long and hard about what your app will actually do (more on that below), and choose an appropriate framework for your use-case; and make liberal use of online resources such as reviews (I also found TodoMVC extremely useful, plus I used its well-written code samples as the foundation for my own code).

What seems to be the problem?

Nothing to see here, people.
Nothing to see here, people.
Image source: Funny Junk (originally from South Park).

Ok, so you're going to write a single-page JS app. What will your app actually do? "Single-page JS app" can mean anything; and if we're trying to find the appropriate tool for the job, then the job itself needs to be clearly defined. So, let's break it down a bit.

Is the app (mainly) read-write, or is it read-only? This is a critical question, possibly more so than anything else. One of the biggest challenges with rich JS apps is synchronising data between client and server. If data is only flowing one way (downstream), that's a whole lot less complexity than if data is flowing upstream as well.

Turns out that JS frameworks, in general, have dedicated a lot of their feature set to supporting read-write apps. They usually do this by having "models" (the "M" in "MVC"), which are the "source of truth" on the client-side; and by "binding" these models to elements in the DOM. When the value of a DOM element changes, that triggers a model data change, which in turn (often) triggers a server-side data update. Conversely, when new data arrives from the server, the model data is updated accordingly, and that update then propagates automatically to a value in the DOM.

Even the quintessential "Todo app" example has two-way data. Turns out, however, that my app only has one-way data. My app is all about sending queries to the server (with some simple filters), and receiving metric data in response. What's more, the received data is aggregate data (ready to be rendered as charts and tables), not individual entities that can easily be stored in a model. So, turns out that my life is easier without worrying about models or event bindings at all. Receive JSON, pipe it to the chart renderer (NVD3 for most charts), end of story.

Can displayed data change dynamically within a single JS route, or can it only change when the route changes? Once again, the former entails a lot more complexity than the latter. In my app's case, each JS route (handled by Sammy.js, same as with other frameworks, as "the part of the URL after the hash character") is a single report (containing one or more graphs and tables). The report elements themselves aren't dynamic (except that hovering over various graph elements shows more info). Changing the filters of the current report, or going to a different report, involves executing a new JS route.

So, if data isn't changing dynamically within a single JS route, why bother with complex event bindings? Some simple "old-skool" jQuery event handlers may be all that's necessary.

In summary, in the case of my app, all that it really needed in a JS framework was: client-side routing (which Sammy.js provides using nice, simple callbacks); local storage (Sammy.js has a thin wrapper on top of the HTML5 local storage API); AJAX communication (Sammy.js has a thin wrapper on top of jQuery for this); and templating (out-of-the-box Sammy.js supports John Resig's JS micro-templating system). And that's already a whole lot of funky new client-side components to learn and use. Why complicate things further?

Early days

There be dragons here.
There be dragons here.
Image source: Stormy Horizon Picture.

All in all, I enjoyed building my first single-page JS app, and I'm reasonably happy with how it turned out to be architected. The front-end uses Sammy.js, D3.js/NVD3, and Bootstrap. The back-end uses Flask (Python) and MongoDB. Other than the login page and the admin pages, the app only has one non-JSON server-side route (the home page), and the rest is handled with client-side routes. The client-side is fairly simple, compared to many rich JS apps being built today; but then again, every app is unique.
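
To give a rough feel for the server-side shape of that (just a sketch with made-up route names and data, not my actual code), the Flask side boils down to one HTML route plus a bunch of JSON endpoints:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def home():
    # The only non-JSON server-side route: it just serves the HTML shell,
    # and the client-side (Sammy.js) routing takes over from there.
    return '<div id="app"></div><script src="/static/app.js"></script>'

@app.route('/api/report-data')
def report_data():
    # Everything else is a JSON endpoint, queried via AJAX by the
    # client-side routes (the data here is made up).
    return jsonify(series=[{'label': 'Visits', 'values': [10, 20, 30]}])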

I think that right now, we're still in Wild West times as far as building single-page apps goes. In particular, there are way too many frameworks out there; as the space matures, no doubt most of these frameworks will die off, and only a handful will thrive in the long-term. There's also a shortage of good advice about design patterns for single-page apps so far, although Mixu's book is a great foundation resource.

Single-page JS technology has plenty of advantages: it can lead to a more responsive, more beautiful app; and, when done right, its JS component can be architected just as cleanly and correctly as everything would be (traditionally) architected on the server-side. Remember, though, that it's just one piece in the puzzle, and that it only needs to be as complex as the app you're building.

]]>
Mixing GData auth with Google Discovery API queries https://greenash.net.au/thoughts/2014/08/mixing-gdata-auth-with-google-discovery-api-queries/ Mon, 11 Aug 2014 00:00:00 +0000 https://greenash.net.au/thoughts/2014/08/mixing-gdata-auth-with-google-discovery-api-queries/ For those of you who have some experience working with Google's APIs, you may be aware of the fact that they fall into two categories: the Google Data APIs, which is mainly for older services; and the discovery-based APIs, which is mainly for newer services.

There has been considerable confusion regarding the difference between the two APIs. I'm no expert, and I admit that I too have fallen victim to the confusion at times. Both systems now require the use of OAuth2 for authentication (it's no longer possible to access any Google APIs without OAuth2). However, each of Google's APIs only falls into one of the two camps; and once authentication is complete, you must use the correct library (either GData or Discovery, for your chosen programming language) in order to actually perform API requests. So, all that really matters is that for each API that you plan to use, you're crystal clear on which type of API it is, and you use the correct corresponding library.

The GData Python library has a very handy mechanism for exporting an authorised access token as a blob (i.e. a serialised string), and for later re-importing the blob back as a programmatic access token. I made extensive use of this when I recently worked with the Google Analytics API, which is GData-based. I couldn't find any similar functionality in the Discovery API Python library; and I wanted to interact similarly with the YouTube Data API, which is discovery-based. What to do?

Mix 'n' match

The GData API already supports converting a Credentials object to an OAuth2 token object. This is great for an app that has user-facing OAuth2, where a Credentials object is available at the time of making API requests. However, in my situation – making API requests in a server-side script, that runs via cron with no user-facing OAuth2 – that's not much use. I have the opposite problem: I can easily get the token object, but I don't have any Credentials object already instantiated.

Well, it turns out that manually instantiating your own Credentials object isn't that hard. So, this is how I go about querying the YouTube Data API:

import httplib2

import gdata.gauth
from apiclient.discovery import build
from oauth2client.client import OAuth2Credentials

from mysettings import token_blob_string, \
                       youtube_playlist_id, \
                       page_size, \
                       next_page_token

# De-serialise the access token that can be conveniently stored in a
# Python settings file elsewhere, as a blob (string).
# GData provides the blob functionality, but the Discovery API library
# doesn't.
token = gdata.gauth.token_from_blob(token_blob_string)

# Manually instantiate an OAuth2Credentials object from the
# de-serialised access token.
credentials = OAuth2Credentials(
    access_token=token.access_token,
    client_id=token.client_id,
    client_secret=token.client_secret,
    refresh_token=token.refresh_token,
    token_expiry=None,
    token_uri=token.token_uri,
    user_agent=None)

http = credentials.authorize(httplib2.Http())
youtube = build('youtube', 'v3', http=http)

# Profit!
response = youtube.playlistItems().list(
    playlistId=youtube_playlist_id,
    part="snippet",
    maxResults=page_size,
    pageToken=next_page_token
).execute()
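
As a small follow-up sketch (not lifted from my actual script, though the field names are the standard ones in the playlistItems.list response), consuming the result and paginating further is as simple as:

# The response is a plain dict: print each video's title, and grab the
# token for the next page of results (it's absent on the last page).
for item in response.get('items', []):
    print(item['snippet']['title'])

next_page_token = response.get('nextPageToken')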

Easy win

And there you go: you can have your cake and eat it, too! All you need is an OAuth2 access token that you've already saved elsewhere as a blob string; and with that, you can query discovery-based Google APIs from anywhere you want, at any time, with no additional OAuth2 hoops to jump through.

If you want more details on how to serialise and de-serialise access token blobs using the GData Python library, others have explained it step-by-step, I'm not going to repeat all of that here. I hope this makes life a bit easier, for anyone else who's trying to deal with "offline" long-lived access tokens and the discovery-based Google APIs.

]]>
Australian LGA to postcode mappings with PostGIS and Intersects https://greenash.net.au/thoughts/2014/07/australian-lga-to-postcode-mappings-with-postgis-and-intersects/ Sat, 12 Jul 2014 00:00:00 +0000 https://greenash.net.au/thoughts/2014/07/australian-lga-to-postcode-mappings-with-postgis-and-intersects/ For a recent project, I needed to know the LGAs (Local Government Areas) of all postcodes in Australia, and vice versa. As it turns out, there is no definitive Australia-wide list containing this data anywhere. People have been discussing the issue for some time, with no clear outcome. So, I decided to get creative.

To cut a long story short: I've produced my own list! You can download my Australian LGA postcode mappings spreadsheet from Google Docs.

If you want the full story: I imported both the LGA boundaries data and the Postal Area boundaries data from the ABS, into PostGIS, and I did an "Intersects" query on the two datasets. I exported the results of this query to CSV. Done! And all perfectly reproducible, using freely available public data sets, and using free and open-source software tools.

The process

I started by downloading the Geo data that I needed, from the ABS. My source was the page Australian Statistical Geography Standard (ASGS): Volume 3 - Non ABS Structures, July 2011. This was the most recent page that I could find on the ABS, containing all the data that I needed. I downloaded the files "Local Government Areas ASGS Non ABS Structures Ed 2011 Digital Boundaries in MapInfo Interchange Format", and "Postal Areas ASGS Non ABS Structures Ed 2011 Digital Boundaries in MapInfo Interchange Format".

Big disclaimer: I'm not an expert at anything GIS- or spatial-related; I'm a complete n00b at this. I decided to download the data I needed in MapInfo format. It's also available on the ABS web site in ArcGIS Shapefile format. I could have downloaded the Shapefiles instead – they can also be imported into PostGIS, using the same tools that I used. I chose the MapInfo files because I did some quick Googling around, and I got the impression that MapInfo files are less complex and are somewhat more portable. I may have made the wrong choice. Feel free to debate the merits of MapInfo vs ArcGIS files for this task, and to try this out yourself using ArcGIS instead of MapInfo. I'd be interested to see the difference in results (theoretically there should be no difference… in practice, who wants to bet there is?).

I then had to install PostGIS (I already had Postgres installed) and related tools on my local machine (running Ubuntu 12.04). I'm not providing PostGIS installation instructions here; there's plenty of information available elsewhere to help you get set up with all the tools you need for your specific OS / requirements. Installing PostGIS and related tools can get complicated, so if you do decide to try all this yourself, don't say I didn't warn you. Ubuntu is probably one of the easier platforms on which to install it, but there are plenty of guides out there for Windows and Mac too.

Once I was all set up, I imported the data files into a PostGIS-enabled Postgres database with these commands:

ogr2ogr -a_srs EPSG:4283 -f "PostgreSQL" \
PG:"host=localhost user=lgapost dbname=lgapost password=PASSWORD" \
-lco OVERWRITE=yes -nln lga LGA_2011_AUST.mid

ogr2ogr -a_srs EPSG:4283 -f "PostgreSQL" \
PG:"host=localhost user=lgapost dbname=lgapost password=PASSWORD" \
-lco OVERWRITE=yes -nln postcodes POA_2011_AUST.mid

If you're interested in the OGR Toolkit (ogr2ogr and friends), there are plenty of resources available; in particular, this OGR Toolkit guide was very useful for me.

After playing around with a few different map projections, I decided that EPSG:4283 was probably the correct one to use as an argument to ogr2ogr. I based my decision on seeing the MapInfo projection string "CoordSys Earth Projection 1, 116" in the header of the ABS data files, and then finding this list of common Australian-used map projections. Once again: I am a total n00b at this. I know very little about map projections (except that it's a big and complex topic). Feel free to let me know if I've used completely the wrong projection for this task.

I renamed the imported tables to 'lga' and 'postcodes' respectively, and I then ran this from the psql shell, to find all LGAs that intersect with all postal areas, and to export the result to a CSV:

\copy (SELECT     l.state_name_2011,
                  l.lga_name_2011,
                  p.poa_code_2011
       FROM       lga l
       INNER JOIN postcodes p
       ON         ST_Intersects(
                      l.wkb_geometry,
                      p.wkb_geometry)
       ORDER BY   l.state_name_2011,
                  l.lga_name_2011,
                  p.poa_code_2011)
TO '/path/to/lga_postcodes.csv' WITH CSV HEADER;

Final remarks

That's about it! Also, some notes of mine (mainly based on the trusty Wikipedia page Local Government in Australia):

  • There's no data for the ACT, since the ACT has no LGAs
  • Almost the entire Brisbane metro area is a single LGA, and the same goes for the Gold Coast
  • Some areas of Australia aren't part of any LGA (although they're all remote areas with very small populations)
  • Quite a large number of valid Australian postcodes are not part of any LGA (because they're for PO boxes, for bulk mail handlers, etc, and they don't cover a geographical area as such, in the way that "normal" postcodes do) – see the sketch just after this list for a quick way to cross-check which postal areas matched no LGA at all
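
As a quick sanity check on the last two points, you can flip the join around and list the postal areas in the ABS data that didn't intersect a single LGA. Here's a minimal sketch of that query, using Python and psycopg2 – neither of which the workflow above actually depends on (the same SELECT can just as easily be run from the psql shell):

import psycopg2

# Connection details are assumptions – adjust to match the database that
# the ogr2ogr imports above were loaded into.
conn = psycopg2.connect(host='localhost', dbname='lgapost',
                        user='lgapost', password='PASSWORD')
cur = conn.cursor()

# Postal areas whose boundary intersects no LGA boundary at all.
# (The PO box / bulk mail style postcodes generally don't appear in the
# POA boundary data in the first place, so they won't show up here.)
cur.execute("""
    SELECT     p.poa_code_2011
    FROM       postcodes p
    LEFT JOIN  lga l
               ON ST_Intersects(l.wkb_geometry, p.wkb_geometry)
    WHERE      l.lga_name_2011 IS NULL
    ORDER BY   p.poa_code_2011""")

for (poa_code,) in cur.fetchall():
    print(poa_code)

cur.close()
conn.close()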

I hope that this information is of use, to anyone else who needs to link up LGAs and postcodes in a database or in a GIS project.

Database-free content tagging with files and glob https://greenash.net.au/thoughts/2014/05/database-free-content-tagging-with-files-and-glob/ Tue, 20 May 2014 00:00:00 +0000 https://greenash.net.au/thoughts/2014/05/database-free-content-tagging-with-files-and-glob/ Tagging data (e.g. in a blog) is many-to-many data. Each content item can have multiple tags. And each tag can be assigned to multiple content items. Many-to-many data needs to be stored in a database. Preferably a relational database (e.g. MySQL, PostgreSQL), otherwise an alternative data store (e.g. something document-oriented like MongoDB / CouchDB). Right?

If you're not insane, then yes, that's right! However, for a recent little personal project of mine, I decided to go nuts and experiment. Check it out, this is my "mapping data" store:

Just a list of files in a directory.

And check it out, this is me querying the data store:

Show me all posts with the tag 'fun-stuff'.

And again:

Show me all tags for the post 'rant-about-seashells'.

And that's all there is to it. Many-to-many tagging data stored in a list of files, with content item identifiers and tag identifiers embedded in each filename. Querying is by simple directory listing shell commands with wildcards (also known as "globbing").
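
To make the idea concrete, here's the same pair of queries sketched in Python – purely as an illustration, since the site itself is implemented in PHP (shown below); the filename convention assumed here (post identifier, double dash, tag slug) is the one used by that PHP code:

import glob
import os

# Assumed to mirror the directory layout used by the PHP implementation below.
MAPPINGS_DIR = 'mappings/blog_tags'

def posts_with_tag(slug):
    """All post identifiers that carry the given tag."""
    paths = glob.glob(os.path.join(MAPPINGS_DIR, '*--' + slug + '.php'))
    return sorted(os.path.basename(p).split('--')[0] for p in paths)

def tags_for_post(identifier):
    """All tag slugs assigned to the given post."""
    paths = glob.glob(os.path.join(MAPPINGS_DIR, identifier + '--*.php'))
    return sorted(os.path.basename(p)[len(identifier) + 2:-len('.php')]
                  for p in paths)

print(posts_with_tag('fun-stuff'))
print(tags_for_post('rant-about-seashells'))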

Is it user-friendly to add new content? No! Does it allow the rich querying of SQL and friends? No! Is it scalable? No!

But… Is the basic querying it allows enough for my needs? Yes! Is it fast (for a store of up to several thousand records)? Yes! And do I have the luxury of not caring about user-friendliness or scalability in this instance? Yes!

Implementation

For the project in which I developed this system, I implemented the querying with some simple PHP code. For example, this is my "content item" store:

Another list of files in a directory.

These are the functions to do some basic querying on all content:

<?php
/**
 * Queries for all blog pages.
 *
 * @return
 *   List of all blog pages.
 */
function blog_query_all() {
  $files = glob(BASE_FILE_PATH . 'pages/blog/*.php');
  if (!empty($files)) {
    foreach (array_keys($files) as $k) {
      $files[$k] = str_replace(BASE_FILE_PATH . 'pages/blog/',
                               '',
                               $files[$k]);
    }
    rsort($files);
  }

  return $files;
}

/**
 * Queries for blog pages with the specified year / month.
 *
 * @param $year
 *   Year.
 * @param $month
 *   Month
 *
 * @return
 *   List of blog pages with the specified year / month.
 */
function blog_query_byyearmonth($year, $month) {
  $files = glob(BASE_FILE_PATH . 'pages/blog/' .
                $year . '-' . $month . '-*.php');
  if (!empty($files)) {
    foreach (array_keys($files) as $k) {
      $files[$k] = str_replace(BASE_FILE_PATH . 'pages/blog/',
                               '',
                               $files[$k]);
    }
  }

  return $files;
}

/**
 * Gets the previous blog page (by date).
 *
 * @param $full_identifier
 *   Full identifier of current blog page.
 *
 * @return
 *   Full identifier of previous blog page.
 */
function blog_get_prev($full_identifier) {
  $files = blog_query_all();
  $curr_index = array_search($full_identifier . '.php', $files);

  if ($curr_index !== FALSE && $curr_index < count($files)-1) {
    return str_replace('.php', '', $files[$curr_index+1]);
  }

  return NULL;
}

/**
 * Gets the next blog page (by date).
 *
 * @param $full_identifier
 *   Full identifier of current blog page.
 *
 * @return
 *   Full identifier of next blog page.
 */
function blog_get_next($full_identifier) {
  $files = blog_query_all();
  $curr_index = array_search($full_identifier . '.php', $files);

  if ($curr_index !== FALSE && $curr_index !== 0) {
    return str_replace('.php', '', $files[$curr_index-1]);
  }

  return NULL;
}

And these are the functions to query content by tag:

<?php
/**
 * Queries for blog pages with the specified tag.
 *
 * @param $slug
 *   Tag slug.
 *
 * @return
 *   List of blog pages with the specified tag.
 */
function blog_query_bytag($slug) {
  $files = glob(BASE_FILE_PATH .
                'mappings/blog_tags/*--' . $slug . '.php');
  if (!empty($files)) {
    foreach (array_keys($files) as $k) {
      $files[$k] = str_replace(BASE_FILE_PATH . 'mappings/blog_tags/',
                               '',
                               $files[$k]);
    }
    rsort($files);
  }

  return $files;
}

/**
 * Gets a blog page's tags based on its full identifier.
 *
 * @param $full_identifier
 *   Blog page's full identifier.
 *
 * @return
 *   Tags.
 */
function blog_get_tags($full_identifier) {
  $files = glob(BASE_FILE_PATH .
                'mappings/blog_tags/' . $full_identifier . '*.php');
  $ret = array();

  if (!empty($files)) {
    foreach ($files as $f) {
      $ret[] = str_replace(BASE_FILE_PATH . 'mappings/blog_tags/' .
                             $full_identifier . '--',
                           '',
                           str_replace('.php', '', $f));
    }
  }

  return $ret;
}

That's basically all the "querying" that this blog app needs.

In summary

What I've shared here is part of the solution that I recently built when I migrated Jaza's World Trip (my travel blog from 2007-2008) away from (an out-dated version of) Drupal, and into a new database-free custom PHP thingamajig. (I'm considering writing a separate article about what else I developed, and I'm also considering cleaning it up and releasing it as a boilerplate PHP project template on GitHub… although I'm not sure if it's worth the effort, we shall see).

This is an old blog site that I wanted to "retire", i.e. to migrate off a CMS platform, and into more-or-less static files. So, the filesystem-based data store that I developed in this case was a good match, because:

  • No new content will be added to the site in the future
  • Migrating the site to a different server (in the hypothetical future) would consist of simply copying all the files, and the new server would only need to support PHP (which is about as widely supported as server-side web technologies get)
  • If the data store performs well with the current volume of content, that's great; I don't care if it doesn't scale to millions of records (due to e.g. files-per-directory OS limits being reached, glob performance worsening), because it will never have that many

Most sites that I develop are new, and they don't fit this use case at all. They need a content management admin interface. They need to scale. And they usually need various other features (e.g. user login) that also commonly rely on a traditional database backend. However, for this somewhat unique use-case, building a database-free tagging data store was a fun experiment!

Sharing templates between multiple Drupal views https://greenash.net.au/thoughts/2014/04/sharing-templates-between-multiple-drupal-views/ Thu, 24 Apr 2014 00:00:00 +0000 https://greenash.net.au/thoughts/2014/04/sharing-templates-between-multiple-drupal-views/ Do you have multiple views on your Drupal site, where the content listing is themed to look exactly the same? For example, say you have a custom "search this site" view, a "featured articles" view, and an "articles archive" view. They all show the same fields — for example, "title", "image", and "summary". They all show the same content types – except that the first one shows "news" or "page" content, whereas the others only show "news".

If your design is sufficiently custom that you're writing theme-level Views template files, then chances are that you'll be in danger of creating duplicate templates. I've committed this sin on numerous sites over the past few years. On many occasions, my Views templates were 100% identical, and after making a change in one template, I literally copy-pasted and renamed the file, to update the other templates.

Until, finally, I decided that enough is enough – time to get DRY!

Being less repetitive with your Views templates is actually dead simple. Let's say you have three identical files – views-view-fields--search_this_site.tpl.php, views-view-fields--featured_articles.tpl.php, and views-view-fields--articles_archive.tpl.php. Here's how you clean up your act:

  1. Delete the latter two files.
  2. Add this to your theme's template.php file:
    <?php
    function mytheme_preprocess_views_view_fields(&$vars) {
      if (in_array(
        $vars['view']->name, array(
          'search_this_site',
          'featured_articles',
          'articles_archive'))) {
        $vars['theme_hook_suggestions'][] =
          'views_view_fields__search_this_site';
      }
    }
    

  3. Clear your cache (that being the customary final step when doing anything in Drupal, of course).

I've found that views-view-fields.tpl.php-based files are the biggest culprits for duplication; but you might have some other Views templates in need of cleaning up, too, such as:

<?php
function mytheme_preprocess_views_view(&$vars) {
  if (in_array(
    $vars['view']->name, array(
      'search_this_site',
      'featured_articles',
      'articles_archive'))) {
    $vars['theme_hook_suggestions'][] =
      'views_view__search_this_site';
  }
}

And, if your views include a search / filtering form, perhaps also:

<?php
function mytheme_preprocess_views_exposed_form(&$vars) {
  if (in_array(
    $vars['view']->name, array(
      'search_this_site',
      'featured_articles',
      'articles_archive'))) {
    $vars['theme_hook_suggestions'][] =
      'views_exposed_form__search_this_site';
  }
}

That's it – just a quick tip from me for today. You can find out more about this technique on the Custom Theme Hook Suggestions documentation page, although I couldn't find an example for Views there, nor anywhere else online for that matter; hence this article. Hopefully this results in a few kilobytes saved, and (more importantly) a lot of unnecessary copy-pasting of template files saved, for fellow Drupal devs and themers.

The cost of building a "perfect" custom Drupal installation profile https://greenash.net.au/thoughts/2014/04/the-cost-of-building-a-perfect-custom-drupal-installation-profile/ Wed, 16 Apr 2014 00:00:00 +0000 https://greenash.net.au/thoughts/2014/04/the-cost-of-building-a-perfect-custom-drupal-installation-profile/ With virtually everything in Drupal, there are two ways to accomplish a task: The Easy Way, or The Right™ Way.

Deploying a new Drupal site for the first time is no exception. The Easy Way – and almost certainly the most common way – is to simply copy your local version of the database to production (or staging), along with user-uploaded files. (Your code needs to be deployed too, and The Right™ Way to deploy it is with version-control, which you're hopefully using… but that's another story.)

The Right™ Way to deploy a Drupal site for the first time (at least since Drupal 7, and "with hurdles" since Drupal 6), is to only deploy your code, and to reproduce your database (and ideally also user-uploaded files) with a custom installation profile, and also with significant help from the Features module.

The Right Way can be a deep rabbit hole, though.
Image source: SIX Nutrition.

I've been churning out quite a lot of Drupal sites over the past few years, and I must admit, the vast majority of them were deployed The Easy Way. Small sites, single developer, quick turn-around. That's usually the way it rolls. However, I've done some work that's required custom installation profiles, and I've also been trying to embrace Features more; and so, for my most recent project – despite it being "yet another small-scale, one-dev site" – I decided to go the full hog, and to build it 100% The Right™ Way, just for kicks. In order to force myself to do things properly, I re-installed my dev site from scratch (and thus deleted my dev database) several times a day; i.e. I continuously tested my custom installation profile during dev.

Does it give me a warm fuzzy feeling, as a dev, to be able to install a perfect copy of a new site from scratch? Hell yeah. But does that warm fuzzy feeling come at a cost? Hell yeah.

What's involved

For our purposes, the contents of a typical Drupal database can be broken down into three components:

  1. Critical configuration
  2. Secondary configuration
  3. Content

Critical configuration is: (a) stuff that should be set immediately upon site install, because important aspects of the site depend on it; and (b) stuff that cannot or should not be managed by Features. When building a custom installation profile, all critical configuration should be set with custom code that lives inside the profile itself, either in its hook_install() implementation, or in one of its hook_install_tasks() callbacks. The config in this category generally includes: the default theme and its config; the region/theme for key blocks; user roles, basic user permissions, and user variables; date formats; and text formats. This config isn't all that hard to write (see Drupal core's built-in installation profiles for good example code), and it shouldn't need much updating during dev.

Secondary configuration is: (a) stuff that can be set after the main install process has finished; and (b) stuff that's managed by Features. These days, thanks to various helpers such as Strongarm and Features Extra, there isn't much that can't be exported and managed in this way. All secondary configuration should be set in exportable definitions in Features-generated modules, which need to be added as dependencies in the installation profile's .info file. On my recent project, this included: many variables; content types; fields; blocks (including Block Class classes and block content); views; vocabularies; image styles; nodequeues; WYSIWYG profiles; and CER presets.

Secondary config isn't hard to write – in fact, it writes itself! However, it is a serious pain to maintain. Every time you add or modify any piece of secondary config on your dev site, you need to perform the following workflow:

  1. Does an appropriate feature module already exist for this config? If not, create a new feature module, export it to your site's codebase, and add the module as a dependency to the installation profile's .info file.
  2. Is this config new? If so, manually add it to the relevant feature.
  3. For all new or updated config: re-create the relevant feature module, thus re-exporting the config.

I found that I got in the habit of checking my site's Features admin page, before committing whatever code I was about to commit. I re-exported all features that were flagged with changes, and I tried to remember if there was any new config that needed to be added to a feature, before going ahead and making the commit. Because I decided to re-install my dev site from scratch regularly, and to scrap my local database, I had no choice but to take this seriously: if there was any config that I forgot to export, it simply got lost in the next re-install.

Content is stuff that is not config. Content depends on all critical and secondary config being set. And content is not managed by Features: it's managed by users, once the site is deployed. (Content can now be managed by Features, using the UUID module – but I haven't tried that approach, and I'm not particularly convinced that it's The Right™ Way.) On my recent project, content included: nodes (of course); taxonomy terms; menu items; and nodequeue mappings.

An important part of handing over a presentable site to the client, in my experience, is that there's at least some demo / structural content in place. So, in order to handle content in my "continuously installable" setup, I wrote a bunch of custom Drush commands, which defined all the content in raw PHP using arrays / objects, and which imported all the content using Drupal's standard API functions (i.e. node_save() and friends). This also included user-uploaded files (i.e. images and documents): I dumped all these into a directory outside of my Drupal root, and imported them using the Field API and some raw file-copying snippets.

All rosy?

The upside of it all: I lived the dream on this project. I freed myself from database state. Everything I'd built was safe and secure within the code repo, and the only thing that needed to be deployed to staging / production was the code itself.

Join me, comrades! Join me and all Drupal sites will be equal! (But some more equal than others).

(Re-)installing the site consisted of little more than running (something similar to) these Drush commands:

drush cc all
drush site-install --yes mycustomprofile --account-mail=info@blaaaaaaaa.com --account-name=admin --account-pass=blaaaaaaa
drush features-revert-all --yes
drush mymodule-install-content

The downside of it: constantly maintaining exported features and content-in-code eats up a lot of time. As a rough estimate, I'd say that it resulted in me spending about 30% more time on the project than I would have otherwise. Fortunately, the project was still delivered ahead of schedule and under budget; had constraints been tighter, I probably couldn't have afforded the luxury of this experiment.

Unfortunately, Drupal just isn't designed to store either configuration or content in code. Doing either is an uphill battle. Maintaining all config and content in code was virtually impossible in Drupal 5 and earlier; it had numerous hurdles in Drupal 6; and it's possible (and recommended) but tedious in Drupal 7. Drupal 8 – despite the enormous strides forward that it's making with the Configuration Management Initiative (CMI) – will still, at the end of the day, treat the database rather than code as the "source of truth" for config. Therefore, I assert that, although it will be easier than ever to manage all config in code, the "configuration management" and "continuous deployment" problems still won't be completely solved in Drupal 8.

I've been working increasingly with Django over the past few years, where configuration only exists in code (in Python settings, in model classes, in view callables, etc), and where only content exists in the database (and where content has also been easily exportable / deployable using fixtures, since before Drupal "exportables" were invented); and in that world, these are problems that simply don't exist. There's no need to ever synchronise between the "database version" of config and the "code version" of config. Unfortunately, Drupal will probably never reach this Zen-like ideal, because it seems unlikely that Drupal will ever let go of the database as a config store altogether.

Anyway, despite the fact that a "perfect" installation profile probably isn't justifiable for most smaller Drupal projects, I think that it's still worthwhile, in the same way that writing proper update scripts is still worthwhile: i.e. because it significantly improves quality; and because it's an excellent learning tool for you as a developer.

Using PayPal WPS with Cartridge (Mezzanine / Django) https://greenash.net.au/thoughts/2014/03/using-paypal-wps-with-cartridge-mezzanine-django/ Mon, 31 Mar 2014 00:00:00 +0000 https://greenash.net.au/thoughts/2014/03/using-paypal-wps-with-cartridge-mezzanine-django/ I recently built a web site using Mezzanine, a CMS built on top of Django. I decided to go with Mezzanine (which I've never used before) for two reasons: it nicely enhances Django's admin experience (plus it enhances, but doesn't get in the way of, the Django developer experience); and there's a shopping cart app called Cartridge that's built on top of Mezzanine, and for this particular site (a children's art class business in Sydney) I needed shopping cart / e-commerce functionality.

This suite turned out to deliver virtually everything I needed out-of-the-box, with one exception: Cartridge currently lacks support for payment methods that require redirecting to the payment gateway and then returning after payment completion (such as PayPal Website Payments Standard, or WPS). It only supports payment methods where payment is completed on-site (such as PayPal Website Payments Pro, or WPP). In this case, with the project being small and low-budget, I wanted to avoid the overhead of dealing with SSL and on-site payment, so PayPal WPS was the obvious candidate.

Turns out that, with a bit of hackery, making Cartridge play nice with WPS isn't too hard to achieve. Here's how you go about it.

Install dependencies

Note / disclaimer: this section is mostly copied from my Django Facebook user integration with whitelisting article from over two years ago, because the basic dependencies are quite similar.

I'm assuming that you've already got an environment set up that's equipped for Django development. I.e. you've already installed Python (my examples here are tested on Python 2.7), a database engine (preferably SQLite on your local environment), pip (recommended), and virtualenv (recommended). If you want to implement these examples fully, then as well as a dev environment with these basics set up, you'll also need a server to which you can deploy a Django site, and on which you can set up a proper public domain or subdomain DNS (because the PayPal API won't actually talk to your localhost; it refuses to do that).

You'll also need a PayPal (regular and "sandbox") account, which you will use for authenticating with the PayPal API.

Here are the basic dependencies for the project. I've copy-pasted this straight out of my requirements.txt file, which I install on a virtualenv using pip install -E . -r requirements.txt (I recommend you do the same):

Django==1.6.2
Mezzanine==3.0.9
South==0.8.4
Cartridge==0.9.2
cartridge-payments==0.97.0
-e git+https://github.com/dcramer/django-paypal.git@4d582243#egg=django_paypal
django-uuidfield==0.5.0

Note: for dcramer/django-paypal, which has no versioned releases, I'm using the latest git commit as of writing this. I recommend that you check for a newer commit and update your requirements accordingly. For the other dependencies, you should also be able to update version numbers to latest stable releases without issues (although Mezzanine 3.0.x / Cartridge 0.9.x is only compatible with Django 1.6.x, not Django 1.7.x which is still in beta as of writing this).

Once you've got those dependencies installed, make sure this Mezzanine-specific setting is in your settings.py file:

# If True, the south application will be automatically added to the
# INSTALLED_APPS setting.
USE_SOUTH = True

Then, let's get a new project set up per Mezzanine's standard install:

mezzanine-project myproject
cd myproject
python manage.py createdb
python manage.py migrate --all

(When it asks "Would you like to install an initial demo product and sale?", I've gone with "yes" for my test / demo project; feel free to do the same, if you'd like some products available out-of-the-box with which to test checkout / payment).

This will get the Mezzanine foundations installed for you. The basic configuration of the Django / Mezzanine settings file, I leave up to you. If you have some experience already with Django (and if you've got this far, then I assume that you do), you no doubt have a standard settings template already in your toolkit (or at least a standard set of settings tweaks), so feel free to use it. I'll be going over the settings you'll need specifically for this app, in just a moment.

Fire up ye 'ol runserver, open your browser at http://localhost:8000/, and confirm that the "Congratulations!" default Mezzanine home page appears for you. Also confirm that you can access the admin. And that's the basics set up!

Basic Django / Mezzanine / Cartridge site: default look after install.

At this point, you should also be able to test out adding an item to your cart and going to checkout. After entering some billing / delivery details, on the 'payment details' screen it should ask for credit card details. This is the default Cartridge payment setup: we'll be switching this over to PayPal shortly.

Configure Django settings

I'm not too fussed about what else you have in your Django settings file (or in how your Django settings are structured or loaded, for that matter); but if you want to follow along, then you should have certain settings configured per the following guidelines (note: much of this is virtually the same as the cartridge-payments install instructions):

  • Your TEMPLATE_CONTEXT_PROCESSORS setting needs to include (as well as 'mezzanine.conf.context_processors.settings'):
    [
        'payments.multipayments.context_processors.settings',
    ]

    (See the TEMPLATE_CONTEXT_PROCESSORS documentation for the default value of this setting, to paste into your settings file).

  • Re-configure the SHOP_CHECKOUT_FORM_CLASS setting to this:
    SHOP_CHECKOUT_FORM_CLASS = 'payments.multipayments.forms.base.CallbackUUIDOrderForm'
  • Disable the PRIMARY_PAYMENT_PROCESSOR_IN_USE setting:
    PRIMARY_PAYMENT_PROCESSOR_IN_USE = False
  • Configure the SECONDARY_PAYMENT_PROCESSORS setting to this:
    SECONDARY_PAYMENT_PROCESSORS = (
        ('paypal', {
            'name' : 'Pay With Pay-Pal',
            'form' : 'payments.multipayments.forms.paypal.PaypalSubmissionForm'
        }),
    )
  • Set a value for the PAYPAL_CURRENCY setting, for example:
    # Currency type.
    PAYPAL_CURRENCY = "AUD"
  • Set a value for the PAYPAL_BUSINESS setting, for example:
    # Business account email. Sandbox emails look like this.
    PAYPAL_BUSINESS = 'cartwpstest@blablablaaaaaaa.com'
  • Set a value for the PAYPAL_RECEIVER_EMAIL setting, for example:
    PAYPAL_RECEIVER_EMAIL = PAYPAL_BUSINESS
  • Set a value for the PAYPAL_RETURN_WITH_HTTPS setting, for example:
    # Use this to enable https on return URLs.  This is strongly recommended! (Except for sandbox)
    PAYPAL_RETURN_WITH_HTTPS = False
  • Configure the PAYPAL_RETURN_URL setting to this:
    # Function that returns args for `reverse`.
    # URL is sent to PayPal as the URL for returning to a 'complete' landing page.
    PAYPAL_RETURN_URL = lambda cart, uuid, order_form: ('shop_complete', None, None)
  • Configure the PAYPAL_IPN_URL setting to this:
    # Function that returns args for `reverse`.
    # URL is sent to PayPal as the URL to callback to for PayPal IPN.
    # Set to None if you do not wish to use IPN.
    PAYPAL_IPN_URL = lambda cart, uuid, order_form: ('paypal.standard.ipn.views.ipn', None, {})
  • Configure the PAYPAL_SUBMIT_URL setting to this:
    # URL the secondary-payment-form is submitted to
    # For real use set to 'https://www.paypal.com/cgi-bin/webscr'
    PAYPAL_SUBMIT_URL = 'https://www.sandbox.paypal.com/cgi-bin/webscr'
  • Configure the PAYPAL_TEST setting to this:
    # For real use set to False
    PAYPAL_TEST = True
  • Configure the EXTRA_MODEL_FIELDS setting to this:
    EXTRA_MODEL_FIELDS = (
        (
            "cartridge.shop.models.Order.callback_uuid",
            "django.db.models.CharField",
            (),
            {"blank" : False, "max_length" : 36, "default": ""},
        ),
    )

    After doing this, you'll probably need to manually create a migration in order to get this field added to your database (per Mezzanine's field injection caveat docs), and you'll then need to apply that migration (in this example, I'm adding the migration to an app called 'content' in my project):

    mkdir /projectpath/content/migrations
    touch /projectpath/content/migrations/__init__.py
    python manage.py schemamigration cartridge.shop --auto --stdout > /projectpath/content/migrations/0001_cartridge_shop_add_callback_uuid.py

    python manage.py migrate --all

  • Your INSTALLED_APPS setting needs to include (as well as the basic 'mezzanine.*' apps, and 'cartridge.shop'):
    [
        'payments.multipayments',
        'paypal.standard.ipn',
    ]

    (You'll need to re-run python manage.py migrate --all after enabling these apps).

Implement PayPal payment

Here's how you do it:

  • Add this to your urlpatterns variable in your urls.py file (replace the part after paypal-ipn- with a random string of your choice):
    [
        (r'^paypal-ipn-8c5erc9ye49ia51rn655mi4xs7/', include('paypal.standard.ipn.urls')),
    ]
  • Although it shouldn't be necessary, I've found that I need to copy the templates provided by explodes/cartridge-payments into my project's templates directory; otherwise they're ignored and Cartridge's default payment template still gets used:

    cp -R /projectpath/lib/python2.7/site-packages/payments/multipayments/templates/shop /projectpath/templates/

  • Place the following code somewhere in your codebase (per the django-paypal docs, I placed it in the models.py file for one of my apps):
    # ...
    
    from importlib import import_module

    from django.db.models import F  # needed for the F() expression further down

    from mezzanine.conf import settings

    from cartridge.shop.models import Cart, Order, ProductVariation, \
        DiscountCode
    from paypal.standard.ipn.signals import payment_was_successful
    
    # ...
    
    
    def payment_complete(sender, **kwargs):
        """Performs the same logic as the code in
        cartridge.shop.models.Order.complete(), but fetches the session,
        order, and cart objects from storage, rather than relying on the
        request object being passed in (which it isn't, since this is
        triggered on PayPal IPN callback)."""
    
        ipn_obj = sender
    
        if ipn_obj.custom and ipn_obj.invoice:
            s_key, cart_pk = ipn_obj.custom.split(',')
            SessionStore = import_module(settings.SESSION_ENGINE) \
                               .SessionStore
            session = SessionStore(s_key)
    
            try:
                cart = Cart.objects.get(id=cart_pk)
                try:
                    order = Order.objects.get(
                        transaction_id=ipn_obj.invoice)
                    for field in order.session_fields:
                        if field in session:
                            del session[field]
                    try:
                        del session["order"]
                    except KeyError:
                        pass
    
                    # Since we're manually changing session data outside of
                    # a normal request, need to force the session object to
                    # save after modifying its data.
                    session.save()
    
                    for item in cart:
                        try:
                            variation = ProductVariation.objects.get(
                                sku=item.sku)
                        except ProductVariation.DoesNotExist:
                            pass
                        else:
                            variation.update_stock(item.quantity * -1)
                            variation.product.actions.purchased()
    
                    code = session.get('discount_code')
                    if code:
                        DiscountCode.objects.active().filter(code=code) \
                            .update(uses_remaining=F('uses_remaining') - 1)
                    cart.delete()
                except Order.DoesNotExist:
                    pass
            except Cart.DoesNotExist:
                pass
    
    payment_was_successful.connect(payment_complete)
    

    This little snippet that I whipped up is the critical spoonful of glue that gets PayPal WPS playing nice with Cartridge. Basically, when a successful payment is realised, PayPal WPS doesn't force the user to redirect back to the original web site, and therefore it doesn't rely on any redirection in order to notify the site of success. Instead, it uses PayPal's IPN (Instant Payment Notification) system to make a separate, asynchronous request to the original web site – and it's up to the site to receive this request and to process it as it sees fit.

    This code uses the payment_was_successful signal that django-paypal provides (and that it triggers on IPN request), to do what Cartridge usually takes care of (for other payment methods), on success: i.e. it clears the user's shopping cart; it updates remaining quantities of products in stock (if applicable); it triggers Cartridge's "product purchased" actions (e.g. email an invoice / receipt); and it updates a discount code (if applicable).

  • Apply a hack to cartridge-payments (file lib/python2.7/site-packages/payments/multipayments/forms/paypal.py) per this diff:

    After line 25 (charset = forms.CharField(widget=forms.HiddenInput(), initial='utf-8')), add this:

        custom = forms.CharField(required=False, widget=forms.HiddenInput())

    After line 49 ((tax_price if tax_price else const.Decimal('0'))), add this:

            try:
                s_key = request.session.session_key
            except:
                # for Django 1.4 and above
                s_key = request.session._session_key

    After line 70 (self.fields['business'].initial = settings.PAYPAL_BUSINESS), add this:

    self.fields['custom'].initial = ','.join([s_key, str(request.cart.pk)])
  • Apply a hack to django-paypal (file src/django-paypal/paypal/standard/forms.py) per these instructions:

    After line 15 ("%H:%M:%S %b. %d, %Y PDT",), add this:

                          "%H:%M:%S %d %b %Y PST",    # note this
                          "%H:%M:%S %d %b %Y PDT",    # and that

That should be all you need, in order to get checkout with PayPal WPS working on your site. So, deploy everything that's been done so far to your online server, log in to the Django admin, and for some of the variations for the sample product in the database, add values for "number in stock".

Then, log out of the admin, and navigate to the "shop" section of the site. Try out adding an item to your cart.

Basic Django / Mezzanine / Cartridge site: adding an item to shopping cart.

Once on the "your cart" page, continue by clicking "go to checkout". On the "billing details" page, enter sample billing information as necessary, then click "next". On the "payment" page, you should see a single button labelled "pay with pay-pal".

Basic Django / Mezzanine / Cartridge site: 'go to pay-pal' button.

Click the button, and you should be taken to the PayPal (sandbox, unless configured otherwise) payment landing page. For test cases, log in with a PayPal test account, and click 'Pay Now' to try out the process.

Basic Django / Mezzanine / Cartridge site: PayPal payment screen.

If payment is successful, you should see the PayPal confirmation page, saying "thanks for your order". Click the link labelled "return to email@here.com" to return to the Django site. You should see Cartridge's "order complete" page.

Basic Django / Mezzanine / Cartridge site: order complete screen.

And that's it, you're done! You should be able to verify that the IPN callback was triggered, by checking that the "number in stock" has decreased to reflect the item that was just purchased, and by confirming that an order email / confirmation email was received.

Finished process

I hope that this guide is of assistance, to anyone else who's looking to integrate PayPal WPS with Cartridge. The difficulties associated with it are also documented in this mailing list thread (to which I posted a rough version of what I've illustrated in this article). Feel free to leave comments here, and/or in that thread.

Hopefully the hacks that are currently necessary to get this working will no longer be needed in the future; it's up to the maintainers of the various projects to get the fixes for these committed. Ideally, the custom signal implementation won't be necessary either in the future: it would be great if Cartridge could work out-of-the-box with PayPal WPS. Unfortunately, the current architecture of Cartridge's payment system simply isn't designed for something like IPN; it only plays nicely with payment methods that keep the user on the Django site the entire time. In the meantime, with the help of this article, you should at least be able to get it working, even if more custom code is needed than what would be ideal.

Protect the children, but don't blindfold them https://greenash.net.au/thoughts/2014/03/protect-the-children-but-dont-blindfold-them/ Tue, 18 Mar 2014 00:00:00 +0000 https://greenash.net.au/thoughts/2014/03/protect-the-children-but-dont-blindfold-them/ Being a member of mainstream society isn't for everyone. Some want out.

Societal vices have always been bountiful. Back in the ol' days, it was just the usual suspects. War. Violence. Greed. Corruption. Injustice. Propaganda. Lewdness. Alcoholism. To name a few. In today's world, still more scourges have joined in the mix. Consumerism. Drug abuse. Environmental damage. Monolithic bureaucracy. And plenty more.

There always have been some folks who elect to isolate themselves from the masses, to renounce their mainstream-ness, to protect themselves from all that nastiness. And there always will be. Nothing wrong with doing so.

However, there's a difference between protecting oneself from "the evils of society", and blinding oneself to their very existence. Sometimes this difference is a fine line. Particularly in the case of families, where parents choose to shield from the Big Bad World not only themselves, but also their children. Protection is noble and commendable. Blindfolding, in my opinion, is cowardly and futile.

How's the serenity?
Image source: greenskullz1031 on Photobucket.

Seclusion

There are plenty of examples from bygone times of historical abstainers from mainstream society. Monks and nuns, who have for millennia sought serenity, spirituality, abstinence, and isolation from the material. Hermits of many varieties: witches, grumpy old men / women, and solitary island-dwellers.

Religion has long been an important motive for seclusion. Many have settled on a reclusive existence as their solution to avoiding widespread evils and being closer to G-d. Other than adult individuals who choose a monastic life, there are also whole communities, composed of families with children, who live in seclusion from the wider world. The Amish in rural USA are probably the most famous example, and also one of the longest-running such communities. Many ultra-orthodox Jewish communities, particularly within present-day Israel, could also be considered as secluded.

Amish people in a coach.
Image source: Wikipedia: Amish.

More recently, the "commune living" hippie phenomenon has seen tremendous growth worldwide. The hippie ideology is, of course, generally an anti-religious one, with its acceptance of open relationships, drug use, lack of hierarchy, and often a lack of any formal G-d. However, the secluded lifestyle of hippie communes is actually quite similar to that of secluded religious groups. It's usually characterised by living amidst, and in tune with, nature; rejecting modern technology; and maintaining a physical distance from regular urban areas. The left-leaning members of these communities tend to strongly shun consumerism, and to promote serenity and spirituality, much like their G-d fearing comrades.

In a bubble

Like the members of these communities, I too am repulsed by many of the "evils" within the society in which we live. Indeed, the idea of joining such a community is attractive to me. It would be a pleasure and a relief to shut myself out from the blight that threatens me, and from everyone that's "infected" by it. Life would be simpler, more peaceful, more wholesome.

I empathise with those who have chosen this path in life. Just as it's tempting to succumb to all the world's vices, so too is it tempting to flee from them. However, such people are also living in a bubble. An artificial world, from which the real world has been banished.

What bothers me is not so much the independent adult people who have elected for such an existence. Despite all the faults of the modern world, most of us do at least enjoy far-reaching liberty. So, it's a free land, and adults are free to live as they will, and to blind themselves to what they will.

What does bother me, is that children are born and raised in such an existence. The adult knows what it is that he or she is shut off from, and has experienced it before, and has decided to discontinue experiencing it. The child, on the other hand, has never been exposed to reality, he or she knows only the confines of the bubble. The child is blind, but to what, it knows not.

Child in a bubble.
Image source: CultureLab: Breaking out of the internet filter bubble.

This is a cowardly act on the part of the parents. It's cowardly because a child only develops the ability to combat and to reject the world's vices, such as consumerism or substance abuse, by being exposed to them, by possibly experimenting with them, and by making his or her own decisions. Parents who are serious about protecting their children do expose them to the Big Bad World, and they do take risks; but they also do the hard yards in preparing their children for it: they ensure that their children are raised with education, discipline, and love.

Blindfolding children to the reality of wider society is also futile — because, sooner or later, whether still as children or later as adults, the Big Bad World exposes itself to all, whether you like it or not. No Amish countryside, no hippie commune, no far-flung island, is so far or so disconnected from civilisation that its inhabitants can be prevented from ever having contact with it. And when the day of exposure comes, those that have lived in their little bubble find themselves totally unprepared for the very "evils" that they've supposedly been protected from for all their lives.

Keep it balanced

In my opinion, the best way to protect children from the world's vices, is to expose them in moderation to the world's nasty underbelly, while maintaining a stable family unit, setting a strong example of rejecting the bad, and ensuring a solid education. That is, to do what the majority of the world's parents do. That's right: it's a formula that works reasonably well for billions of people, and that has been developed over thousands of years, so there must be some wisdom to it.

Obviously, children need to be protected from dangers that could completely overwhelm them. Bringing up a child in a favela environment is not ideal, and sometimes has horrific consequences, just watch City of G-d if you don't believe me. But then again, blindfolding is the opposite extreme; and one extreme can be as bad as the other. Getting the balance somewhere in between is the key.

Some observations on living in Chile https://greenash.net.au/thoughts/2013/11/some-observations-on-living-in-chile/ Tue, 19 Nov 2013 00:00:00 +0000 https://greenash.net.au/thoughts/2013/11/some-observations-on-living-in-chile/ For almost two years now, I've been living in the grand metropolis of Santiago de Chile. My time here will soon be coming to an end, and before I depart, I'd like to share some of my observations regarding the particularities of life in this city and in this country – principally, as compared with life in my home town, Sydney Australia.

There are plenty of articles round and about the interwebz, aimed more at the practical side of coming to Chile: i.e. tips regarding how to get around; lists of rough prices of goods / services; and crash courses in Chilean Spanish. There are also a number of commentaries on the cultural / social differences between Chile and elsewhere – on the national psyche, and on the political / economic situation.

My endeavour is to avoid this article from falling neatly into either of those categories. That is, I'll be covering some eccentricities of Chile that aren't practical tips as such, although knowing about them may come in handy some day; and I'll be covering some anecdotes that certainly reflect on cultural themes, but that don't pretend to paint the Chilean landscape inside-out, either.

Que disfrutiiy, po. (Roughly: "enjoy, mate", in thick Chilean slang.)

A tale of two cities.
Image sources: Times Journeys / 2GB.

Fin de mes

Here in Chile, all that is money-related is monthly. You pay everything monthly (your rent, all your bills, all membership fees e.g. gym, school / university fees, health / home / car insurance, etc); and you get paid monthly (if you work here, which I don't). I know that Chile isn't the only country with this modus operandi: I believe it's the European system; and as far as I know, it's the system in various other Latin American countries too.

In Australia – and as far as I know, in most English-speaking countries – there are no set-in-stone rules about the frequency with which you pay things, or with which you get paid. Bills / fees can be weekly, monthly, quarterly, annual… whatever (although rent is generally charged and is talked about as a weekly cost). Your pay cheque can be weekly, fortnightly, monthly, quarterly… equally whatever (although we talk about "how much you earn" annually, even though hardly anyone is paid annually). I guess the "all monthly" system is more consistent, and I guess it makes it easier to calculate and compare costs. However, having grown up with the "whatever" system, "all monthly" seems strange and somewhat amusing to me.

In Chile, although payment due dates can be anytime throughout the month, almost everyone receives their salary at fin de mes (the end of the month). I believe the (rough) rule is: the dosh arrives on the actual last day of the month if it's a regular weekday; or the last regular weekday of the month, if the actual last day is a weekend or public holiday (which is quite often, since Chile has a lot of public holidays – twice as many as Australia!).

This system, combined with the last-minute / impulsive way of living here, has an effect that's amusing, frustrating, and (when you think about it) depressingly predictable. As I like to say (in jest, to the locals): in Chile, it's Christmas time every end-of-month! The shops are packed, the restaurants are overflowing, and the traffic is insane, on the last day and the subsequent few days of each month. For the rest of the month, all is quiet. Especially the week before fin de mes, which is really Struggle Street for Chileans. So extreme is this fin de mes culture that it's even busy at the petrol stations at this time, because many wait for their pay cheque before going to fill up the tank.

This really surprised me during my first few months in Chile. I used to ask: ¿Qué pasa? ¿Hay algo importante hoy? ("What's going on? Is something important happening today?"). To which locals would respond: ¡Es fin de mes! ¡Hoy te pagan! ("It's end-of-month! You get paid today!"). These days, I'm more-or-less getting the hang of the cycle; although I don't think I'll ever really get my head around it. I'm pretty sure that, even if we did all get paid on the same day in Australia (which we don't), we wouldn't all rush straight to the shops in a mad stampede, desperate to spend the lot. But hey, that's how life is around here.

Cuotas

Continuing with the socio-economic theme, and also continuing with the "all-monthly" theme: another Chile-ism that will never cease to amuse and amaze me, is the omnipresent cuotas ("monthly instalments"). Chile has seen a spectacular rise in the use of credit cards, over the last few decades. However, the way these credit cards work is somewhat unique, compared with the usual credit system in Australia and elsewhere.

Any time you make a credit card purchase in Chile, the cashier / shop assistant will, without fail, ask you: ¿cuántas cuotas? ("how many instalments?"). If you're using a foreign credit card, like myself, then you must always answer: sin cuotas ("no instalments"). This is because, even if you wanted to pay for your purchase in chunks over the next 3-24 months (and trust me, you don't), you can't, because this system of "choosing at point of purchase to pay in instalments" only works with local Chilean cards.

Chile's current president, the multi-millionaire Sebastian Piñera, played an important part in bringing the credit card to Chile, during his involvement with the banking industry before entering politics. He's also generally regarded as the inventor of the cuotas system. The ability to choose your monthly instalments at point of sale is now supported by all credit cards, all payment machines, all banks, and all credit-accepting retailers nationwide. The system has even spread to some of Chile's neighbours, including Argentina.

Unfortunately, although it seems like something useful for the consumer, the truth is exactly the opposite: the cuotas system and its offspring, the cuotas national psyche, have resulted in the vast majority of Chileans (particularly the less wealthy among them) being permanently and inescapably mired in debt. What's more, although some of the cuotas offered are interest-free (with the most typical being a no-interest 3-instalment plan), some plans and some cards (most notoriously the "department store bank" cards) charge exorbitantly high interest, and are riddled with unfair and arcane terms and conditions.

Última hora

Chile's a funny place, because it's so "not Latin America" in certain aspects (e.g. much better infrastructure than most of its neighbours), and yet it's so "spot-on Latin America" in other aspects. The última hora ("last-minute") way of living definitely falls within the latter category.

In Chile, people do not make plans in advance. At least, not for anything social- or family-related. Ask someone in Chile: "what are you doing next weekend?" And their answer will probably be: "I don't know, the weekend hasn't arrived yet… we'll see!" If your friends or family want to get together with you in Chile, don't expect a phone call the week before. Expect a phone call about an hour before.

I'm not just talking about casual meet-ups, either. In Chile, expect to be invited to large birthday parties a few hours before. Expect to know what you're doing for Christmas / New Year a few hours before. And even expect to know if you're going on a trip or not, a few hours before (and if it's a multi-day trip, expect to find a place to stay when you arrive, because Chileans aren't big on making reservations).

This is in stark contrast to Australia, where most people have a calendar to organise their personal life (something extremely uncommon in Chile), and where most people's evenings and weekends are booked out at least a week or two in advance. Ask someone in Sydney what their schedule is for the next week. The answer will probably be: "well, I've got yoga tomorrow evening, I'm catching up with Steve for lunch on Wednesday, big party with some old friends on Friday night, beach picnic on Saturday afternoon, and a fancy dress party in the city on Saturday night." Plus, ask them what they're doing in two months' time, and they'll probably already have booked: "6 nights staying in a bungalow near Batemans Bay".

The última hora system is both refreshing and frustrating for a planned-ahead foreigner like myself. It makes you realise just how regimented, inflexible, and lacking in spontaneity life can be in your home country. But, then again, it also makes you tear your hair out when people make zero effort to co-ordinate different events and to avoid clashes. Plus, it makes for many an awkward silence when the folks back home ask the question that everybody asks back home, but that nobody asks around here: "so, what are you doing next weekend?" Depends which way the wind blows.

Sit down

In Chile (and elsewhere nearby, e.g. Argentina), you do not eat or drink while standing. In most bars in Chile, everyone is sitting down. In fact, in general there is little or no "bar" area, in bars around here; it's all tables and chairs. If there are no tables or chairs left, people will go to a different bar, or wait for seats to become vacant before eating / drinking. Same applies in the home, in the park, in the garden, or elsewhere: nobody eats or drinks standing up. Not even beer. Not even nuts. Not even potato chips.

In Australia (and in most other English-speaking countries, as far as I know), most people eat and drink while standing, in a range of different contexts. If you're in a crowded bar or pub, eating / drinking / talking while standing is considered normal. Likewise for a big house party. Same deal if you're in the park and you don't want to sit on the grass. I know it's only a little thing; but it's one of those little things that you only realise is different in other cultures, after you've lived somewhere else.

It's also fairly common to see someone eating their take-away or other food while walking, in Australia. Perhaps some hot chips while ambling along the beach. Perhaps a sandwich for lunch while running (late) to a meeting. Or perhaps some lollies on the way to the bus stop. All stuff you wouldn't blink twice at back in Oz. In Chile, that is simply not done. Doesn't matter if you're in a hurry. You couldn't possibly be in such a hurry that you can't sit down to eat in a civilised fashion. The Chilean system is probably better for your digestion! And they have a point: perhaps the solution isn't to save time by eating while walking, but simply to be in less of a hurry?

Do you see anyone eating / drinking and standing up? I don't.
Do you see anyone eating / drinking and standing up? I don't.
Image source: Dondequieroir.

Walled and shuttered

One of the most striking visual differences between the Santiago and Sydney streetscapes, in my opinion, is that walled-up and shuttered-up buildings are far more prevalent in the former than in the latter. Santiago is not a dangerous city, by Latin-American or even by most Western standards; however, it often feels much less secure than it should, particularly at night, because often all you can see around you is chains, padlocks, and sturdy grilles. Chileans tend to shut up shop Fort Knox-style.

Walk down Santiago's Ahumada shopping strip in the evening, and none of the shopfronts can be seen. No glass, no lit-up signs, no posters. Just grey steel shutters. Walk down Sydney's Pitt St in the evening, and – even though all the shops close earlier than in Santiago – it doesn't feel like a prison, it just feels like a shopping area after-hours.

In Chile, virtually all houses and apartment buildings are walled and gated. Also, particularly ugly in my opinion, schools in Chile are surrounded by high thick walls. For both houses and schools, it doesn't matter if they're upper- or lower-class, nor what part of town they're in: that's just the way they build them around here. In Australia, on the other hand, you can see most houses and gardens from the street as you go past (and walled-in houses are criticised as being owned by "paranoid people"); same with schools, which tend to be open, expansive spaces, seldom delimiting their boundary with anything more than a low mesh fence.

As I said, Santiago isn't a particularly dangerous city, although it's true that robbery is far more common here than in Sydney. The real difference, in my opinion, is that Chileans simply don't feel safe unless they're walled in and shuttered up. Plus, it's something of a vicious cycle: if everyone else in the city has a wall around their house, and you don't, then chances are that your house will be targeted, not because it's actually easier to break into than the house next door (which has a wall that can be easily jumped over anyway), but simply because it looks more exposed. Anyway, I will continue to argue to Chileans that their country (and the world in general) would be better with fewer walls and fewer barriers; and, no doubt, they will continue to stare back at me in bewilderment.

Santiago's central shopping strip on Sunday: grey on grey.
Santiago's central shopping strip on Sunday: grey on grey.
Image source: eszsara (Flickriver).

In summary

So, there you have it: a few of my random observations about life in Santiago, Chile. I hope you've found them educational and entertaining. Overall, I've enjoyed my time in this city; and while I'm sometimes critical of and poke fun at Santiago's (and Chile's) peculiarities, I'm also pretty sure I'll miss them when I'm gone. If you have any conclusions of your own regarding life in this big city, feel free to share them below.

]]>
How smart are smartphones? https://greenash.net.au/thoughts/2013/11/how-smart-are-smartphones/ Sat, 02 Nov 2013 00:00:00 +0000 https://greenash.net.au/thoughts/2013/11/how-smart-are-smartphones/ As of about two months ago, I am a very late and reluctant entrant into the world of smartphones. All that my friends and family could say to me, was: "What took you so long?!" Being a web developer, everyone expected that I'd already long since jumped on the bandwagon. However, I was actually in no hurry to make the switch.

Techomeanies: sad dinosaur.
Techomeanies: sad dinosaur.
Image source: Sticky Comics.

Being now acquainted with my new toy, I believe I can safely say that my reluctance was not (entirely) based on my being a "phone dinosaur", an accusation that some have levelled at me. Apart from the fact that they offer "a tonne of features that I don't need", I'd assert that the current state-of-the-art in smartphones suffers some serious usability, accessibility, and convenience issues. In short: these babies ain't so smart as their purty name suggests. These babies still have a lotta growin' up to do.

Hello Operator, how do I send a telegram to Aunt Gertie in Cheshire using Android 4.1.2?
Hello Operator, how do I send a telegram to Aunt Gertie in Cheshire using Android 4.1.2?
Image source: sondasmcschatter.

Touchy

Mobile phones with few buttons are all the rage these days. This is principally thanks to the demi-g-ds at Apple, who deign that we mere mortals should embrace all that is white with chrome bezel.

Apple has been waging war on the button for some time. For decades, the Mac mouse has been a single-button affair, in contrast to the two- or three-button standard PC rodent. Since the dawn of the iEra, a single (wheel-like) button has dominated all iShtuff. (For a bit of fun, watch how this single-button phenomenon reached its unholy zenith with the unveiling of the MacBook Wheel). And, most recently, since Apple's invention of the i(AmTheOneTrue)Phone (of which all other smartphones are but a paltry and pathetic imitation attempted by mere mortals), smartphones have been almost by definition "big on touch-screen, low on touch-button".

Oh sorry, did I forget to mention that I can't resist bashing the Cult of Mac at every opportunity?
Oh sorry, did I forget to mention that I can't resist bashing the Cult of Mac at every opportunity?
Image source: Crazy Art Ideas.

I'm not happy about this. I like buttons. You can feel buttons. There is physical space between each button. Buttons physically move when you press them.

You can't feel the icons on a touch screen. A touch screen is one uninterrupted physical surface. And a touch screen doesn't provide any tactile response when pressed.

There is active ongoing research in this field. Just this year, the world's first fully-functional bumpy touchscreen prototype was showcased by California-based Tactus. However, so far no commercial smartphones have been developed using this technology. Hopefully, in another few years' time, the situation will be different; but for the current state-of-the-art smartphones, the lack of tactile feedback in the touch screens is a serious usability issue.

Related to this is the touch-screen keyboard that current-generation smartphones provide. Seriously, it's a shocker. I wouldn't say I have particularly fat fingers, nor would I call myself a luddite (am I not a web developer?). Nevertheless, touch-screen keyboards frustrate the hell out of me. And, as I understand it, I'm not alone in my anguish. I'm far too often hitting a letter adjacent to the one I poked. Apart from the lack of extruding keys / tactile feedback, each letter is also unmanageably small. An e-mail that I can write in about 4 minutes on my laptop takes me 20 minutes to write on my smartphone.

Touch screens have other issues, too. Manufacturers are struggling to get touch sensitivity levels spot-on: from my personal experience, my Galaxy S3 is hyper-sensitive, with even the lightest brush of a finger setting it off; whereas my fiancée's iPhone 4 is somewhat under-sensitive, almost never responding to my touch until I start poking it hard (although maybe it just senses my anti-Apple vibes and says STFU). The fragility of touch screens is also of serious concern – as a friend of mine recently joked: "these new phones are delicate little princesses". Fortunately, I haven't had any shattered or broken touch-screen incidents as yet (only a small superficial scratch so far); but I've heard plenty of stories.

Before my recent switch to Samsung, I was a Nokia boy for almost 10 years – about half that time (the recent half) with a 6300; and the other half (the really good ol' days) with a 3100. Both of those phones were "bricks", as flip-phones never attracted me. Both of them were treated like cr@p and endured everything (especially the ol' 3100, which was a wonderfully tough little bugger). Both had a regular keypad (the 3100's keypad boasted particularly rubbery, well-spaced buttons), with which I could write text messages quickly and proficiently. And both sported more button real-estate than screen real-estate. All good qualities, none of which are to be found in the current crop of touch-monsters.

Great, but can you make calls with it?

After the general touch-screen issues, this would have to be my next biggest criticism of smartphones. Big on smart, low on phone.

But hey, I guess it's better than "big on shoe, low on phone".
But hey, I guess it's better than "big on shoe, low on phone".
Image source: Ars Technica.

Smartphones let you check your email, update your Facebook status, post your phone-camera-taken photos on Instagram, listen to music, watch movies, read books, find your nearest wood-fired pizza joint that's open on Mondays, and much more. They also, apparently, let you make and receive phone calls.

It's not particularly hard to make calls with a smartphone. But, then again, it's not as easy as it was with "dumb phones", nor is it as easy as it should be. On both of the smartphones that I'm now most familiar with (Galaxy S3 and iPhone 4), calling a contact requires more than the minimum two clicks ("open contacts", and "press call"). On the S3, this can be done with a click and a "swipe right", which (although I've now gotten used to it) felt really unintuitive to begin with. Plus, there's no physical "call" button, only a touch-screen "call" icon (making it too easy to accidentally message / email / Facebook someone when you meant to call them, and vice-versa).

Receiving calls is more problematic, and caused me significant frustration to begin with. Numerous times, I've rejected a call when I meant to answer it (by either touching the wrong icon, or by the screen getting brushed as I extract the phone from my pocket). And really, Samsung, what crazy-a$$ Gangnam-style substances were you guys high on, when you decided that "hold and swipe in one direction to answer, hold and swipe in the other direction to reject" was somehow a good idea? The phone is ringing, I have about five seconds, so please don't make me think!

In my opinion, there REALLY should be a physical "answer / call" button on all phones, period. And, on a related note, rejecting calls and hanging up (tasks just as critical as calling / answering) are fraught with difficulty too; there also REALLY should be a physical "hang up" button on all phones, period. I know that various smartphones have had, and continue to have, these two physical buttons; however, bafflingly, neither the iPhone nor the Galaxy includes them. And once again, Samsung, one must wonder how many purple unicorns were galloping over the cubicles, when you decided that "let's turn off the screen when you want to hang up, and oh, if by sheer providence the screen is on when you want to hang up, the hang-up button could be hidden in the slid-up notification bar" was what actual carbon-based human lifeforms wanted in a phone?

Hot and shagged out

Two other critical problems that I've noticed with both the Galaxy and the iPhone (the two smartphones that are currently considered the crème de la crème of the market, I should emphasise).

Firstly, they both start getting quite hot, after just a few minutes of any intense activity (making a call, going online, playing games, etc). Now, I understand that smartphones are full-fledged albeit pocket-sized computers (for example, the Galaxy S3 has a quad-core processor and 1-2GB of RAM). However, regular computers tend to sit on tables or floors. Holding a hot device in your hands, or keeping one in your pocket, is actually very uncomfortable. Not to mention a safety hazard.

Secondly, there's the battery-life problem. Smartphones may let you do everything under the sun, but they don't let you do it all day without a recharge. It seems pretty clear to me that while smartphones are a massive advancement compared to traditional mobiles, the battery technology hasn't advanced anywhere near on par. As many others have reported, even with relatively light use, you're lucky to last a full day without needing to plug your baby in for some intravenous AC TLC.

In summary

I've had a good ol' rant, about the main annoyances I've encountered during my recent initiation into the world of smartphones. I've focused mainly on the technical issues that have been bugging me. Various online commentaries have discussed other aspects of smartphones: for example, the oft-unreasonable costs of owning one; and the social and psychological concerns, such as aggression / meanness, impatience / chronic boredom, and endemic antisocial behaviour (that last article also mentions another concern that I've written about before, how GPS is eroding navigational ability). While in general I agree with these commentaries, personally I don't feel they're such critical issues – or, to be more specific, I guess I feel that these issues already existed and already did their damage in the "traditional mobile phone" era, and that smartphones haven't worsened things noticeably. So, I won't be discussing those themes in this article.

Anyway, despite my scathing criticism, the fact is that I'm actually very impressed with all the cool things that smartphones can do; and yes, although I was dragged kicking and screaming, I have also succumbed and joined the "dark side" myself, and I must admit that I've already made quite thorough use of many of my smartphone's features. Also, it must be remembered that – although many people already claim that they "can hardly remember what life was like before smartphones" – this is a technology that's still in its infancy, and it's only fair and reasonable that there are still numerous (technical and other) kinks yet to be ironed out.

]]>
Symfony2: as good as PHP gets? https://greenash.net.au/thoughts/2013/10/symfony2-as-good-as-php-gets/ Wed, 16 Oct 2013 00:00:00 +0000 https://greenash.net.au/thoughts/2013/10/symfony2-as-good-as-php-gets/ I've been getting my hands dirty with Symfony2 of late. At the start of the year, I was introduced to it when I built an app using Silex (a Symfony2 distribution). The special feature of my app was that it allows integration between Silex and Drupal 7.

More recently, I finished another project, which I decided to implement using Symfony2 Standard Edition. Similar to my earlier project, it had a business requirement for tight integration with a Drupal site; so, for this new project, I decided to write a Symfony2 Drupal integration bundle.

Overall, I'm quite impressed with Symfony2 (in its various flavours), and I enjoy coding in it. I've been struggling to enjoy coding in Drupal (and PHP in general) – the environment that I know best – for quite some time. That's why I've been increasingly turning to Django (and other Python frameworks, e.g. Flask), for my dev projects. Symfony2 is a very welcome breath of fresh air in the PHP world.

However, I can't help but think: is Symfony2 "as good as PHP gets"? By that, I mean: Symfony2 appears to have borrowed many of the best practices that have evolved in the non-PHP world, and to have implemented them about as well as they physically can be implemented in PHP (indeed, the same could be said of PHP itself of late). But, PHP being so inferior to most of its competitors in so many ways, PHP implementations are also doomed to being inferior to their alternatives.

Pragmatism

I try to be a pragmatic programmer – I believe that I'm getting more pragmatic, and less sentimental, as I continue to mature as a programmer. That means that my top concerns when choosing a framework / environment are:

  • Which one helps me get the job done in the most efficient manner possible? (i.e. which one costs my client the least money right now)
  • Which one best supports me in building a maintainable, well-documented, re-usable solution? (i.e. which one will cost my client the least money in the long-term)
  • Which one helps me avoid frustrations such as repetitive coding, reverse-engineering, and manual deployment steps? (i.e. which one costs me the least headaches and knuckle-crackings)

Symfony2 definitely gets more brownie points from me than Drupal does, on the pragmatic front. For projects whose data model falls outside the standard CMS data model (i.e. pages, tags, assets, links, etc), I need an ORM (which Drupal's field API is not). For projects whose business logic falls outside the standard CMS business logic model (i.e. view / edit pages, submit simple web forms, search pages by keyword / tag / date, etc), I need a request router (which Drupal's menu API is not). It's also a nice added bonus to have a view / template system that gives me full control over the output without kicking and screaming (as is customary for Drupal's theme system).

However, Symfony2 Standard Edition is a framework, and Drupal is a CMS. Apples and oranges.

Django is a framework. It's also been noted already, by various other people, that many aspects of Symfony2 were inspired by their counterparts in Django (among other frameworks, e.g. Ruby on Rails). So, how about comparing Symfony2 with Django?

Although they're written in different languages, Symfony2 and Django actually have quite a lot in common. In particular, Symfony2's Twig template engine is syntactically very similar to the Django template language; in fact, it's fairly obvious that Twig's syntax was ripped off from (ahem, inspired by) that of Django templates (Twig isn't the first Django-esque template engine, either, so I guess that if imitation is the highest form of flattery, then the Django template language should be feeling thoroughly flattered by now).
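
Just how similar? Here's a trivial template fragment (assuming a hypothetical items variable in the template context) that is valid, character for character, in both Twig and the Django template language:

<ul>
{% for item in items %}
  <li>{{ item.name }}</li>
{% endfor %}
</ul>

The {% %} block tags and the {{ }} output tags behave in essentially the same way in both engines, at least for simple cases like this one.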

The request routing / handling systems of Symfony2 and Django are also fairly similar. However, there are significant differences in their implementation styles; and in my personal opinion, the Symfony2 style feels more cumbersome and less elegant than the Django style.

For example, here's the code you'd need to implement a basic 'Hello World' callback:

In Symfony2

app/AppKernel.php (in AppKernel->registerBundles()):

<?php
$bundles = array(
    // ...

    new Hello\Bundle\HelloBundle(),
);

app/config/routing.yml:

hello:
    resource: "@HelloBundle/Controller/"
    type:     annotation
    prefix:   /

src/Hello/Bundle/Controller/DefaultController.php:

<?php
namespace Hello\Bundle\Controller;

use Symfony\Component\HttpFoundation\Response;
// Also needed, so that the Controller base class and the @Route
// annotation below can be resolved:
use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Sensio\Bundle\FrameworkExtraBundle\Configuration\Route;

class DefaultController extends Controller
{
    /**
     * @Route("/")
     */
    public function indexAction()
    {
        return new Response('Hello World');
    }
}

In Django

project/settings.py:

INSTALLED_APPS = [
    # ...

    'hello',
]

project/urls.py:

from django.conf.urls import *

from hello.views import index

urlpatterns = patterns('',
    # ...

    url(r'^$', index, name='hello'),
)

project/hello/views.py:

from django.http import HttpResponse

def index(request):
    return HttpResponse("Hello World")

As you can see above, the steps involved are basically the same for each system. First, we have to register with the framework the "thing" that our Hello World callback lives in: in Symfony2, the "thing" is called a bundle; and in Django, it's called an app. In both systems, we simply add it to the list of installed / registered "things". However, in Symfony2, we have to instantiate a new object, and we have to specify the namespace path to the class; whereas in Django, we simply add the (path-free) name of the "thing" to a list, as a string.

Next, we have to set up routing to our request callback. In Symfony2, this involves using a configuration language (YAML), rather than the framework's programming language (PHP); and it involves specifying the "path" to the callback, as well as the format in which the callback is defined ("annotation" in this case). In Django, it involves importing the callback "callable" as an object, and adding it to the "urlpatterns" list, along with a regular expression defining its URL path.

Finally, there's the callback itself. In Symfony2, the callback lives in a FooController.php file within a bundle's Controller directory. The callback itself is an "action" method that lives within a "controller" class (you can have multiple "actions", in this example there's just one). In Django, the callback doesn't have to be a method within a class: it can be any Python "callable", such as a "class object"; or, as is the case here, a simple function.

I could go on here, and continue with more code comparisons (e.g. database querying / ORM system, form system, logging); but I think what I've shown is sufficient for drawing some basic observations. Feel free to explore Symfony2 / Django code samples in more depth if you're still curious.
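
Well, alright, one more quick taste before moving on: here's roughly what fetching the ten most recent published articles might look like in each system. (This is only a sketch, assuming a hypothetical Article entity / model with published and created fields; it isn't code from either of my actual projects.)

In Symfony2 (using Doctrine, from within a controller action):

<?php
// findBy($criteria, $orderBy, $limit)
$articles = $this->getDoctrine()
    ->getRepository('HelloBundle:Article')
    ->findBy(
        array('published' => true),
        array('created' => 'DESC'),
        10
    );

In Django:

from hello.models import Article

articles = Article.objects.filter(published=True).order_by('-created')[:10]

The difference in flavour is much the same as in the routing example above: both get the job done in a few lines, but they go about it in noticeably different styles.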

Funny language

Basically, my criticism is not of Symfony2, as such. My criticism is more of PHP. In particular, I dislike both the syntax and the practical limitations of the namespace system that was introduced in PHP 5.3. I've blogged before about what bugs me in a PHP 5.3-based framework, and after writing that article I was accused of letting my PHP 5.3 rants cloud my judgement of the framework. So, in this article I'd like to more clearly separate language ranting from framework ranting.

Language rant

In the PHP 5.3+ namespace system (all three gripes are illustrated in the short snippet after this list):

  • The namespace delimiter is the backslash character; whereas in other (saner) languages it's the dot character
  • You have to specify the "namespace path" using the "namespace" declaration at the top of every single file in your project that contains namespaced classes; whereas in other (saner) languages the "namespace path" is determined automatically based on directory structure
  • You can only import namespaces using their absolute path, resulting in overly verbose "use" declarations all over the place; whereas in other (saner) languages relative (and wildcard) namespace imports are possible
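
For example (a hypothetical file, but representative of what you'll find scattered throughout any Symfony2 project):

<?php
// src/Hello/Bundle/Controller/DefaultController.php
// The namespace has to be spelled out by hand in every file, using
// backslashes, even though it merely mirrors the directory structure.
namespace Hello\Bundle\Controller;

// Imports can only be specified by absolute path: no relative paths,
// and no wildcards, so the "use" lines pile up quickly.
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpFoundation\RedirectResponse;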

Framework rant

In Symfony2:

  • You're able to define configuration (e.g. routing callbacks) in multiple formats, with the preferred format being YAML (although raw PHP configuration is also possible), resulting in an over-engineered config system, and in unnecessary extra learning of an invented format just to perform configuration in the default way
  • Only a class method can be a routing callback; a class itself, or a stand-alone function, cannot be one, because the routing system is too tightly coupled with PHP's class- and method-based namespace system
  • An overly complex and multi-levelled directory structure is needed for even the simplest projects, and what's more, overly verbose namespace declarations and import statements are found in almost every file; this is all a reflection of Symfony2's dependence on the PHP 5.3+ namespace system

In summary

Let me repeat: I really do think that Symfony2 is a great framework. I've done professional work with it recently. I intend to continue doing professional work with it in the future. It ticks my pragmatic box of supporting me in building a maintainable, well-documented, re-usable solution. It also ticks my box of avoiding reverse-engineering and manual deployment steps.

However, does it help me get the job done in the most efficient manner possible? If I have to work in PHP, then yes. If I have the choice of working in Python instead, then no. And does it help me avoid frustrations such as repetitive coding? More-or-less: Symfony2 project code isn't too repetitive, but it certainly isn't as compact as I'd like my code to be.

Symfony2 is brimming with the very best of what cutting-edge PHP has to offer. But, at the same time, it's hindered by its "PHP-ness". I look forward to seeing the framework continue to mature and to evolve. And I hope that Symfony2 serves as an example to all programmers, working in all languages, of how to build the most robust product possible, within the limits of that product's foundations and dependencies.

]]>
Current state of the Cape to Cairo Railway https://greenash.net.au/thoughts/2013/08/current-state-of-the-cape-to-cairo-railway/ Thu, 01 Aug 2013 00:00:00 +0000 https://greenash.net.au/thoughts/2013/08/current-state-of-the-cape-to-cairo-railway/ In the late 19th century, the British-South-African personality Cecil Rhodes dreamed of a complete, uninterrupted railway stretching from Cape Town, South Africa, all the way to Cairo, Egypt. During Rhodes's lifetime, the railway extended as far north as modern-day Zimbabwe – which was in that era known by its colonial name Rhodesia (in honour of Rhodes, whose statesmanship and entrepreneurism made its founding possible). A railway traversing the entire north-south length of Africa was an ambitious dream, for an ambitious man.

Rhodes's dream remains unfulfilled to this day.

The famous "Rhodes Colossus", superimposed upon the present-day route of the Cape to Cairo Railway.
The famous "Rhodes Colossus", superimposed upon the present-day route of the Cape to Cairo Railway.
"The Rhodes Colossus" illustration originally from Punch magazine, Vol. 103, 10 Dec 1892; image courtesy of Wikimedia Commons. Africa satellite image courtesy of Google Earth.

Nevertheless, significant additions have been made to Africa's rail network during the intervening century; and, in fact, only a surprisingly small section of the Cape to Cairo route remains bereft of the Iron Horse's footprint.

Although information abounds about both (a) the historical Cape to Cairo dream, and (b) the history and current state of the route's various railway segments, I was unable to find any comprehensive study of the current state of the railway in its entirety.

This article, therefore, is an endeavour to examine the current state of the full Cape to Cairo Railway. As part of this study, I've prepared a detailed map of the route, which marks in-service sections, abandoned sections, and missing sections. The map has been generated from a series of KML files, which I've made publicly available on GitHub, and for which I welcome contributions in the form of corrections / tweaks to the route.

Southern section

As its name suggests, the line begins in Cape Town, South Africa. The southern section of the railway encompasses South Africa itself, along with the nearby countries that have historically been part of the South African sphere of influence: that is, Botswana, Zimbabwe, and Zambia.

Southern section of the Cape to Cairo Railway: Cape Town to Kapiri Mposhi.
Southern section of the Cape to Cairo Railway: Cape Town to Kapiri Mposhi.

The first segment – Cape Town to Johannesburg – is also the oldest, the best-maintained, and the best-serviced part of the entire route. The first train travelled this segment in 1892. There has been continuous service ever since. It's the only train route in all of Africa that can honestly claim to provide a "European standard" of passenger service, between the two major cities that it links. That is, there are numerous classes of service operating on the line – ranging from basic inter-city commuter trains, to business-style fast trains, to luxury sleeper trains – running several times a day.

So, the first leg of the railway is the one that we should be least worried about. Hence, it's marked in green on the map. This should come as no surprise, considering that South Africa has the best-developed infrastructure in all of Africa (by a long shot), as well as Africa's largest economy.

After Johannesburg, we continue along the railway that was already fulfilling Cecil Rhodes's dream before his death. This segment runs through modern-day Botswana, which was previously known as Bechuanaland Protectorate. From Johannesburg, it connects to the city of Mafeking, which was the capital of the former Bechuanaland Protectorate, but which today is within South Africa (where it is a regional capital). The line then crosses the border into Botswana, passes through the capital Gaborone, and continues to the city of Francistown in Botswana's north-east.

Unfortunately, since the opening of the Beitbridge Bulawayo Railway in Zimbabwe in 1999 (providing a direct train route between Zimbabwe and South Africa for the first time), virtually all regular passenger service on this segment (and hence, virtually all regular passenger service on Botswana's train network) has been cancelled. The track is still being maintained, and (apart from some freight trains) there are still occasional luxury tourist trains using the route. However, it's unclear if there are still any regular passenger services between Johannesburg and Mafeking (if there are, they're very few); and sources indicate that there are no regular passenger services at all between Mafeking and Francistown. Hence, the segment is marked in yellow on the map.

(I should also note that the new direct train route from South Africa to Zimbabwe, does actually provide regular passenger service, from Johannesburg to Messina, and then from Beitbridge to Bulawayo, with service missing only in the short border crossing between Messina and Beitbridge. However, I still consider the segment via Botswana to be part of the definitive "Cape to Cairo" route: because of its historical importance; and because only quite recently has service ceased on this segment and has an alternative segment been open.)

From Francistown onwards, the situation is back in the green. There is a passenger train from Francistown to Bulawayo, that runs three times a week. I should also mention here, that Bulawayo is quite a significant spot on the Cape to Cairo Railway, as (a) the grave of Cecil Rhodes can be found atop "World's View", a panoramic hilltop in nearby Matobo National Park; and (b) Bulawayo was the first city that the railway reached in (former) Rhodesia, and to this day it remains Zimbabwe's rail hub. Bulawayo is also home to a railway museum.

For the remainder of the route through Zimbabwe, the line remains in the green. There's a daily passenger service from Bulawayo to Victoria Falls. Sadly, this spectacular leg of the route has lost much of its former glory: due to Zimbabwe's recent economic and political woes, the trains are apparently looking somewhat the worse for wear. Nevertheless, the service continues to be popular and reasonably reliable.

The green is briefly interrupted by a patch of yellow, at the border crossing between Zimbabwe and Zambia. This is because there has been no passenger service over the famous Victoria Falls Bridge – which crosses the Zambezi River at spraying distance from the colossal waterfall, connecting the towns of Victoria Falls and Livingstone – more-or-less since the 1970s. Unless you're on board one of the infrequent luxury tourist trains that still traverse the bridge, it must be crossed on foot (or using local transport). It should also be noted that although the bridge is still most definitely intact and looking solid, it's more than 100 years old, and experts have questioned whether it's receiving adequate care and maintenance.

Victoria Falls Bridge: a marvel of modern engineering, straddles one of the world's natural wonders.
Victoria Falls Bridge: a marvel of modern engineering, straddles one of the world's natural wonders.
Image sourced from: Car Hire Victoria Falls.

Once in Zambia – formerly known as Northern Rhodesia – regular passenger services continue north to the capital, Lusaka; and from there, onward to the crossroads town of Kapiri Mposhi. It's here that the southern portion of the modern-day Cape to Cairo railway ends, since Kapiri Mposhi is the meeting-point of the colonial-era, British-built, South-African / Rhodesian railway network, and a modern-era East-African rail link that was unanticipated in Rhodes's plan.

I should also mention here that the colonial-era network continues north from Kapiri Mposhi, crossing the border with modern-day DR Congo (formerly known as the Belgian Congo), and continuing up to the shores of Lake Tanganyika, where it terminates at the town of Kalemie. The plan in the colonial era was that the Cape to Cairo passenger link would continue north via the Great Lakes in this region of Africa – in the form of lake / river ferries, up to Lake Albert, on the present-day DR Congo / Ugandan border – after which the rail link would resume, up to Egypt via Sudan.

However, I don't consider this segment to be part of the definitive "Cape to Cairo" route, because: (a) further rail links between the Great Lakes, up to Lake Albert, were never built; (b) the line running through eastern DR Congo, from the Zambian border to Kalemie on Lake Tanganyika, is apparently in serious disrepair; and (c) an alternative continuous rail link has existed, since the 1970s, via East Africa, and the point where this link terminates in modern-day Uganda is north of Lake Albert anyway. Therefore, the DR Congo – Great Lakes segment is only being mentioned here as an anecdote of history; and we now turn our attention to the East African network.

Eastern section

The Eastern section of the railway is centred in modern-day Tanzania and Kenya, although it begins and ends within the inland neighbours of these two coastal nations – Zambia and Uganda, respectively. This region, much like Southern Africa, was predominantly ruled under British colonialism in the 19th century (which is why Kenya, the region's hub, was formerly known as British East Africa). However, modern-day Tanzania (formerly called Tanganyika, before the union of Tanganyika with Zanzibar) was originally German East Africa, before becoming a British protectorate in the 20th century.

Eastern section of the Cape to Cairo Railway: Kapiri Mposhi to Gulu.
Eastern section of the Cape to Cairo Railway: Kapiri Mposhi to Gulu.

Kapiri Mposhi, in Zambia, is the start of the TAZARA Railway; this railway runs through the north-east of Zambia, crosses the border to Tanzania near Mbeya, and finishes on the Indian Ocean coast at Dar es Salaam, Tanzania's largest city.

The TAZARA is the newest link in the Cape to Cairo railway network: it was built and financed by the Chinese, and was opened in 1976. It's the only line in the network – and one of the only railway lines in all of Africa – that was built (a) by non-Europeans; and (b) in the post-colonial era. It was not envisioned by Rhodes (nor by his contemporaries), who wanted the line to pass through wholly British-controlled territory (Tanzania was still German East Africa in Rhodes's era). The Zambians wanted it, in order to alleviate their dependence (for international transport) on their southern neighbours Rhodesia and South Africa, with whom tensions were high in the 1970s, due to those nations' Apartheid governments. The line has been in regular operation since opening; hence, it's marked in green on the map.

The TAZARA: a "modern" rail link... African style.
The TAZARA: a "modern" rail link... African style.
Image source: Mzuzu Transit.

Although the TAZARA line doesn't quite touch the other Tanzanian railway lines that meet in Dar es Salaam, I haven't marked any gap in the route at Dar es Salaam. This is for two reasons. Firstly, from what I can tell (by looking at maps and satellite imagery), the terminus of the TAZARA in Dar es Salaam is physically separated from the other lines, by a distance of less than two blocks, i.e. a negligible amount. Secondly, the TAZARA is (as of 1998) physically connected to the other Tanzanian railway lines, at a junction near the town of Kidatu, and there is a cargo transshipment facility at this location. However, I don't believe there's any passenger service from Kidatu to the rest of the Tanzanian network (only cargo trains). So, the Kidatu connection is being mentioned here only as an anecdote; in my opinion, the definitive "Cape to Cairo" route passes through, and connects at, Dar es Salaam.

From Dar es Salaam, the line north is part of the decaying colonial-era Tanzanian rail network. This line extends up to the city of Arusha; the part that we're interested in ends at Moshi (east of Arusha), from where another line branches off, crossing the border into Kenya. Sadly, there has been no regular passenger service on the Arusha line for many years; therefore, nor is there any service to Moshi.

After crossing the Kenyan border, the route passes through the town of Taveta, before continuing on to Voi; here, there is a junction with the most important train line in Kenya: that which connects Mombasa and Nairobi. As with the Arusha line, the Moshi – Voi line has also been bereft of regular passenger service for many years. This entire portion of the rail network appears to be in a serious state of neglect. If there are any trains running in this area, they would be occasional freight trains; and if any track maintenance is being performed on these lines, it would be the bare minimum. Therefore, the full segment from Dar es Salaam to Voi is marked in yellow on the map.

From Voi, there are regular passenger services on the main Kenyan rail line to Nairobi; and onward from Nairobi, there are further passenger services (which appear to be less reliable, but regular nonetheless) to the city of Kisumu, which borders Lake Victoria. The part of this route that we're interested in ends at Nakuru (about halfway between Nairobi and Kisumu), from where another line branches off towards Uganda. The route through Kenya, from Voi to Nakuru, is therefore in the green.

After Nakuru, the line meanders its way towards the Ugandan border; and at Tororo (a city on the Ugandan side), it connects with the Ugandan network. There is apparently no longer any passenger service available from Nakuru to Tororo – i.e. there is no service between Kenya and Uganda. As such, this segment is marked in yellow.

The once-proud Ugandan railway network today lies largely abandoned, a victim of Uganda's tragic history of dictatorship, bloodshed and economic disaster since the 1970s. The only inter-city line that maintains regular passenger service, is the main line from the capital, Kampala, to Tororo. As this line terminates at Kampala, on the shores of Lake Victoria (like the Kenyan line to Kisumu), it is of little interest to us.

From Tororo, Uganda's northern railway line stretches north and then west across the country in a grand arc, before terminating at Pakwach, the point where the Albert Nile river begins, adjacent to Lake Albert. (This line supposedly once continued from Pakwach to Arua; however, I haven't been able to find this extension marked on any maps, nor visible in any satellite imagery). The northernmost point of this railway line is at Gulu; and so, it is the segment of the line up to Gulu that interests us.

Sadly, the entire segment from Tororo to Gulu appears to be abandoned; whether there is even freight service today seems doubtful; thus, this segment is marked in yellow. And, doubly sad, Gulu is also the point at which a continuous, uninterrupted rail network all the way from Cape Town, comes to its present-day end. No rail line was ever constructed north of Gulu in Uganda. Therefore, it is at Gulu that the East African portion of the Cape to Cairo Railway bids us farewell.

Northern section

The northern section of the railway is mainly within Sudan and Egypt – although we'll be tracking (the missing section of) the route from northern Uganda; and, since 2011, the route also passes through the newly-independent South Sudan. As with its southern and eastern African counterparts, the northern part of the railway was primarily built by the British, during their former colonial rule in the region.

Northern section of the Cape to Cairo Railway: Gulu to Cairo.
Northern section of the Cape to Cairo Railway: Gulu to Cairo.

We pick up from where we finished in the previous section: Gulu in northern Uganda. As has already been mentioned: from Gulu, we hit the first (of only two) – and the most significant – of the missing links in the Cape to Cairo Railway. The next point where the railway begins again, is the city of Wau, located in north-western South Sudan. Therefore, this segment of the route is marked in red on the map. In the interests of at least marking some form of transport along the missing link, the red line follows the main highway route through the region: the Ugandan A104 highway from Gulu north to the border; and from there, at the city of Nimule (just over the border in South Sudan), the South Sudanese A43 highway to Juba (the capital), and then on to Wau (this highway route is about 1,000km in total).

There has been no shortage of discussion, both past and present, regarding plans to bridge this important gap in the rail network. There have even been recent official announcements by the governments of Uganda and of South Sudan, declaring their intention to build a new rail segment from Gulu to Wau. However, there hasn't been any concrete action since the present-day railheads were established about 50 years ago; and, considering that northern Uganda / South Sudan is one of the most troubled regions in the world today, I wouldn't hold my breath waiting for any fresh steel to appear on the ground (not to mention waiting for repairs of the existing neglected / war-damaged train lines). The folks over there have plenty of other, more urgent matters to attend to.

Wau is the southern terminus of the Sudanese rail network. From Wau, the line heads more-or-less straight up, crossing the border from South Sudan into Sudan, and joining the Khartoum – Darfur line at Babanusa. The Babanusa – Wau line was one of the last train lines to be completed in Sudan, opening in 1962 (around the same time as the Tororo – Gulu – Pakwach line opened in neighbouring Uganda). I found a colourful account of a passenger journey along this line, from around 2000. As I understand it, shortly after this time, the line was damaged by mines and explosives, a victim of the civil war. The line is supposedly rehabilitated, and passenger service has ostensibly resumed – however, personally I'm not convinced that this is the case. Therefore, this segment is marked in yellow on the map.

Similarly, the remaining segment of rail onwards to the capital – Babanusa to Khartoum – was apparently damaged during the civil war (that's on top of the line's ageing and dismally-maintained state). There are supposedly efforts underway to rehabilitate this line (along with the rest of the Sudanese rail network in general), and to restore regular services along it. I haven't been able to confirm whether passenger services have yet been restored; therefore, this segment is also marked in yellow on the map.

From the Sudanese capital Khartoum, the country's principal train line traverses the rest of the route north, running along the banks of the Upper Nile for about half the route, before making a beeline across the harsh expanse of the Nubian Desert, and terminating just before the Egyptian border at the town of Wadi Halfa, on the shores of Lake Nasser (the Sudanese side of which is called Lake Nubia). Although trains do appear to get suspended for long-ish intervals, this is the best-maintained route in war-ravaged Sudan, and it appears that regular passenger services are operating from Khartoum to Wadi Halfa. Therefore, this segment is marked in green on the map.

The border crossing from Sudan into Egypt is the second of the two missing links in the Cape to Cairo Railway. In fact, there isn't even a road connecting the two nations, at least not anywhere near the Nile / Lake Nasser. However, this missing link is of less concern, because: (a) the distance is much less (approximately 350km); and (b) Lake Nasser is a large and easily navigable body of water, with regular ferry services connecting Wadi Halfa in Sudan with Aswan in Egypt. Indeed, the convenience and the efficiency of the ferry service (along with the cargo services operating on the lake) is the very reason why nobody's ever bothered to build a rail link through this segment. So, this segment is marked in red on the map: the red line more-or-less follows the ferry route over the lake.

Aswan is Egypt's southernmost city; this has been the case for millennia, since it was the southern frontier of the realm of the Pharaohs, stretching back to ancient times. Aswan is also the southern terminus of the Egyptian rail network's main line; from here, the line snakes its way north, tracing the curves of the heavily-populated Nile River valley, all the way to Cairo, after which the vast Nile Delta begins.

Train service running alongside the famous Nile River.
Train service running alongside the famous Nile River.
Image source: About.com: Cruises.

The Aswan – Cairo line – the very last segment of Rhodes's grand envisioned network – is second only to the network's very first segment (Cape Town – Johannesburg) in terms of service offerings. There are a range of passenger services available, ranging from basic economy trains, to luxury tourist-oriented sleeper coaches, traversing the route daily. Although Egypt is currently in the midst of quite dramatic political turmoil (indeed, Egypt's recent military coup and ongoing protests are front-page news as I write this), as far as I know these issues haven't seriously disrupted the nation's train services. Therefore, this segment is marked in green on the map.

I should also note that after Cairo, Egypt's main rail line continues on to Alexandria on the Mediterranean coast. However, of course, the Cairo – Alexandria segment is not marked on the map, because it's a map of the Cape to Cairo railway, not the Cape to Alexandria railway! Also, Cairo could be considered to be "virtually" on the Mediterranean coast anyway, as it's connected by various offshoots of the Nile (in the Nile Delta) to the Mediterranean, with regular maritime traffic along these waterways.

End of the line

Well, there you have it: a thorough exercise in mapping and in narrating the present-day path of the Cape to Cairo Railway.

Personally, I've never been to Africa, let alone travelled any part of this long and diverse route. I'd love to do so, one day: although as I've described above, many parts of the route are currently quite a challenge to travel through, and will probably remain so for the foreseeable future. Naturally, I'd be delighted if anyone who has travelled any part of the route could share their "war stories" as comments here.

One question that I've asked myself many times, while researching and writing this article, is: in what year was the Cape to Cairo Railway at its best? (I.e. in what year was more of the line "green" than in any other year?). It would seem that the answer is probably 1976. This year was certainly not without its problems; but, at least as far as the Cape to Cairo endeavour goes, I believe that it was "as good as it's ever been".

This was the year that the TAZARA opened (which is to this day the "latest piece in the puzzle"), providing its inaugural Zambia – Tanzania service. It was one year before the 1977 dissolution of the East African Railways and Harbours Corporation, which jointly developed and managed the railways of Kenya, Uganda, and Tanzania (EAR&H's peak was probably in 1962, and it was already in serious decline by 1976, but nevertheless it continued to provide comprehensive services until its end). And it was a year when Sudan's railways were in better operating condition, that nation being significantly less war-damaged than it is today (although Sudan had already suffered several years of civil war by then).

Unfortunately, it was also a year in which the Rhodesian Bush War was raging intensely – as such, on account of the hostilities between then-Rhodesia and Zambia, the Victoria Falls Bridge was largely closed to all traffic at that time (and, indeed, all travel within then-Rhodesia was probably quite difficult at that time). Then again, this hostility was also the main impetus for the construction of the TAZARA link; so, in the long-term, the tensions in then-Rhodesia actually improved the rail network more than they hampered it.

Additionally, it was a year in which Idi Amin's brutal reign of terror in Uganda was at its height. At that time, travel within Uganda was extremely dangerous, and infrastructure was being destroyed more commonly than it was being maintained.

I'm not the first person to make the observation – a fairly obvious one, after studying the route and its history – that travelling from Cape Town to Cairo overland (by train and/or by other transportation) never has been, and to this day is not, an easy expedition! There are numerous change-overs required, including change of railway gauge, change to land vehicle, change to maritime vehicle, and more. The majority of the rail (and other) services along the route are poorly-maintained, prone to breakdowns, almost guaranteed to suffer extensive delays / cancellations, and vulnerable to seasonal weather fluctuations. And – as if all those "regular" hurdles weren't enough – many (perhaps the majority) of the regions through which the route passes are currently, or have in recent history been, unstable and dangerous trouble zones.

Hope you enjoyed my run-down (or should I say run-up?) of the Cape to Cairo Railway. Note that – as well as the KML files, which can be opened in Google Earth for best viewing of the route – the route is available here as a Google map.

Your contribution to the information presented here would be most welcome. If you have experience with path editing / Google Earth / KML (added bonus if you're Git / GitHub savvy, and know how to send pull requests), check out the route KML on GitHub, and feel free to refine it. Otherwise, feel free to post your route corrections, your African railway anecdotes, and all your most scathing criticism, using the comment form below (or contact me directly).

]]>
Money: the ongoing evolution https://greenash.net.au/thoughts/2013/04/money-the-ongoing-evolution/ Wed, 10 Apr 2013 00:00:00 +0000 https://greenash.net.au/thoughts/2013/04/money-the-ongoing-evolution/ In this article, I'm going to solve all the monetary problems of the modern world.

Oh, you think that's funny? I'm being serious.

Alright, then. I'm going to try and solve them. Money is a concept, a product and a system that's been undergoing constant refinement since the dawn of civilisation; and, as the world's current financial woes are testament to, it's clear that we still haven't gotten it quite right. That's because getting financial systems right is hard. If it were easy, we'd have done it already.

I'm going to start with some background, discussing the basics such as: what is money, and where does it come from? What is credit? What's the history of money, and of credit? How do central banks operate? How do modern currencies attain value? And then I'm going to move on to the fun stuff: what can we do to improve the system? What's the next step in the ongoing evolution of money and finance?

Disclaimer: I am not an economist or a banker; I have no formal education in economics or finance; and I have no work experience in these fields. I'm just a regular bloke, who's been thinking about these big issues, and reading up on a lot of material, and who would like to share his understandings and his conclusions with the world.

Ancient history

Money has been around for a while. When I talk about money, I'm talking cash. The stuff that leaves a smell on your fingers. The stuff that jingles in your pockets. Cold hard cash.

The earliest known example of money dates back to the 7th century BC, when the Lydians minted coins using a natural gold-based alloy called electrum. They were a crude affair – with each coin being of a slightly different shape – but they evolved to become reasonably consistent in their weight in precious metal; and many of them also bore official seals or insignias.

Ancient silver Greek coins.
Ancient silver Greek coins.
Source: Ancient coins.

From Lydia, the phenomenon of minted precious-metal coinage spread: first to her immediate neighbours – the Greek and Persian empires – and then to the rest of the civilised world. By the time the Romans rose to significance, around the 3rd century BC, coinage had become the norm as a medium of exchange; and the Romans established this further with their standard-issue coins, most notably the Denarius, which were easily verifiable and reliable in their precious metal content.

Ten?! Are you trying to insult me?! Me, with a poor dying grandmother?! Ten?!
Ten?! Are you trying to insult me?! Me, with a poor dying grandmother?! Ten?!
Source: London Evening Standard. Quote: Life of Brian haggling scene.

Money, therefore, is nothing new. This should come as no surprise to you.

What may surprise you, however, is that credit existed before the arrival of money. How can that be? I hear you say. Isn't credit – the business of lending, and of recording and repaying a debt – a newer and more advanced concept than money? No! Quite the reverse. In fact, credit is the most fundamental concept of all in the realm of commerce; and historical evidence shows that it was actually established and refined, well before cold hard cash hit the scene. I'll elaborate further when I get on to definitions (next section). For now, just bear with me.

One of the earliest known historical examples of credit – in the form of what essentially amount to "IOU" documents – is from Ancient Babylonia:

… in ancient Babylonia … common commercial documents … are what are called "contract tablets" or "shuhati tablets" … These tablets, the oldest of which were in use from 2000 to 3000 years B. C. are of baked or sun-dried clay … The greater number are simple records of transactions in terms of "she," which is understood by archaeologists to be grain of some sort.

From the frequency with which these tablets have been met with, from the durability of the material of which they are made, from the care with which they were preserved in temples which are known to have served as banks, and more especially from the nature of the inscriptions, it may be judged that they correspond to the medieval tally and to the modern bill of exchange; that is to say, that they are simple acknowledgments of indebtedness given to the seller by the buyer in payment of a purchase, and that they were the common instrument of commerce.

But perhaps a still more convincing proof of their nature is to be found in the fact that some of the tablets are entirely enclosed in tight-fitting clay envelopes or "cases," as they are called, which have to be broken off before the tablet itself can be inspected … The particular significance of these "case tablets" lies in the fact that they were obviously not intended as mere records to remain in the possession of the debtor, but that they were signed and sealed documents, and were issued to the creditor, and no doubt passed from hand to hand like tallies and bills of exchange. When the debt was paid, we are told that it was customary to break the tablet.

We know, of course, hardly anything about the commerce of those far-off days, but what we do know is, that great commerce was carried on and that the transfer of credit from hand to hand and from place to place was as well known to the Babylonians as it is to us. We have the accounts of great merchant or banking firms taking part in state finance and state tax collection, just as the great Genoese and Florentine bankers did in the middle ages, and as our banks do to-day.

Source: What is Money?
Original source: The Banking Law Journal, May 1913, By A. Mitchell Innes.

As the source above mentions (and as it describes in further detail elsewhere), another historical example of credit – as opposed to money – is from medieval Europe, where the split tally stick was commonplace. In particular, in medieval England, the tally stick became a key financial instrument used for taxation and for managing the Crown accounts:

A tally stick is "a long wooden stick used as a receipt." When money was paid in, a stick was inscribed and marked with combinations of notches representing the sum of money paid, the size of the cut corresponding to the size of the sum. The stick was then split in two, the larger piece (the stock) going to the payer, and the smaller piece being kept by the payee. When the books were audited, the official would have been able to produce the stick which exactly matched the tip, and the stick was then surrendered to the Exchequer.

Tallies provide the earliest form of bookkeeping. They were used in England by the Royal Exchequer from about the twelfth century onward. Since the notches for the sums were cut right through both pieces and since no stick splits in an even manner, the method was virtually foolproof against forgery. They were used by the sheriff to collect taxes and to remit them to the king. They were also used by private individuals and institutions, to register debts, record fines, collect rents, enter payments for services rendered, and so forth. By the thirteenth century, the financial market for tallies was sufficiently sophisticated that they could be bought, sold, or discounted.

Source: Tally sticks.

Thirteenth century English tally sticks.
Thirteenth century English tally sticks.
Source: The National Archives.

It should be noted that unlike the contract tablets of Babylonia (and the similar relics of other civilisations of that era), the medieval tally stick existed alongside an established metal-coin-based money system. The ancient tablets recorded payments made, or debts owed, in raw goods (e.g. "on this Tuesday, Bishbosh the Great received eight goats from Hammalduck", or "as of this Thursday, Kimtar owes five kwetzelgrams of silver and nine bushels of wheat to Washtawoo"). These societies may have, in reality, recorded most transactions in terms of precious metals (indeed, it's believed that the silver shekel emerged as the standard unit in ancient Mesopotamia); but that metal circulated as lumps of non-standard shape, bearing no official insignia, whereas classical coinage was uniform in shape and stamped with an issuer's insignia.

In medieval England, the common currency was sterling silver, which consisted primarily of silver penny coins (the shilling and the pound serving, for most of the period, as units of account rather than as coins in circulation). The medieval tally sticks recorded payments made, or debts owed, in monetary value (e.g. "on this Monday, Lord Snottyham received one shilling and eight pence from James Yoohooson", or "as of this Wednesday, Lance Alot owes sixpence to Sir Robin").

Definitions

Enough history for now. Let's stop for a minute, and get some basic definitions clear.

First and foremost, the most basic question of all, but one that surprisingly few people have ever actually stopped to think about: what is money?

There are numerous answers:

Money is a medium of exchange.

Source: The Privateer - What is money?

Money itself … is useless until the moment we use it to purchase or invest in something. Although money feels as if it has an objective value, its worth is almost completely subjective.

Source: Forbes - Money! What is it good for?

As with other things, necessity is, indeed, the mother of invention. People needed a formula of stating the standard value of trade goods.

Thus, money was born.

Source: The Daily Bluster - Where did money come from, anyway?

The seller and the depositor alike receive a credit, the one on the official bank and the other direct on the government treasury. The effect is precisely the same in both cases. The coin, the paper certificates, the bank-notes and the credit on the books of the bank, are all identical in their nature, whatever the difference of form or of intrinsic value. A priceless gem or a worthless bit of paper may equally be a token of debt, so long as the receiver knows what it stands for and the giver acknowledges his obligation to take it back in payment of a debt due.

Money, then, is credit and nothing but credit. A's money is B's debt to him, and when B pays his debt, A's money disappears. This is the whole theory of money.

Source: What is Money?
Original source: The Banking Law Journal, May 1913, By A. Mitchell Innes.

For some, money is a substance in which one may bathe.
For some, money is a substance in which one may bathe.
Image source: DuckTales…Woo-ooo!

I think the first definition is the easiest to understand. Money is a medium of exchange: it has no value in and of itself; but it allows us to more easily exchange, between ourselves, things that do have value.

I think the last definition, however, is the most honest. Money is credit: or, to be more correct, money is a type of credit; a credit that is expressed in a uniform, easily quantifiable / divisible / exchangeable unit of measure (as opposed to a credit that's expressed in goats, or in bushels of wheat).

(Note: the idea of money as credit, and of credit as debt, comes from the credit theory of money, which was primarily formulated by Innes (quoted above). This is just one theory of money. It's not the definitive theory of money. However, I tend to agree with the theory's tenets, and various parts of the rest of this article are founded on the theory. Also, it should not be confused with The Theory of Money and Credit, a book from the Austrian School of economics, which asserts that the only true money is commodity money, and which is thus pretty well the opposite extreme from the credit theory of money.)

Which brings us to the next definition: what is credit?

In the article giving the definition of "money as credit", it's also mentioned that "credit" and "debt" are effectively the same thing; just that the two words represent the two sides of a single relationship / transaction. So, then, perhaps it would make more sense to define what is debt:

Middle English dette: from Old French, based on Latin debitum 'something owed', past participle of debere 'owe'.

A debt is something that one owes; it is one's obligation to give something of value, in return for something that one received.

Conversely, a credit is the fact of one being owed something; it is a promise that one has from another person / entity, that one will be given something of value in the future.

So, then, if we put the two definitions together, we can conclude that: money is nothing more than a promise, from the person / entity who issued the money, that they will give something of value in the future, to the current holder of the money.

Perhaps the simplest example of this to understand, in the modern world, is the gift card typically offered by retailers. A gift card has no value itself: it's nothing more than a promise by the retailer, that they will give the holder of the card a shirt, or a DVD, or a kettle. When the card holder comes into the shop six months later, and says: "I'd like to buy that shirt with this gift card", what he/she really means is: "I have here a written promise from you folks, that you will give me a shirt; I am now collecting what was promised". Once the shirt has been received, the gift card is suddenly worthless, as the documented promise has been fulfilled; this is why, when the retailer reclaims the gift card, they usually just dispose of it.

However, there is one important thing to note: the only value of the gift card, is that it's a promise of being exchangeable for something else; and as long as that promise remains true, the gift card has value. In the case of a gift card, the promise ceases to be true the moment that you receive the shirt; the card itself returns to its original issuer (the retailer), and the story ends there.

Money works the same way, only with one important difference: it's a promise from the government, of being exchangeable for something else; and when you exchange that money with a retailer, in return for a shirt, the promise remains true; so the money still has value. As long as the money continues to be exchanged between regular citizens, the money is not returned to its original issuer, and so the story continues.

So, as with a gift card: the moment that money is returned to its original issuer (the government), that money is suddenly worthless, as the documented promise has been fulfilled. What do we usually return money to the government for? Taxes. What did the government originally promise us, by issuing money to us? That it would take care of us (it doesn't buy us flowers or send us Christmas cards very often; it demonstrates its caring for us mainly with other things, such as education and healthcare). What happens when we pay taxes? The government takes care of us for another year (it's supposed to, anyway). Therefore, the promise ceases to be true; and, believe it or not, the moment that the government reclaims the money in taxes, that money ceases to exist.

The main thing that a government promises, when it issues money, is that it will take care of its citizens; but that's not the only promise of money. Prior to quite recent times, money was based on gold: people used to give their gold to the government, and in return they received money; so, money was a promise that the government would give you back your gold, if you ever wanted to swap again.

In the modern economic system, the governments of the world no longer promise to give you gold (although most governments still have quite a lot of gold, in secret buildings with a lot of fancy locks and many armed guards). Instead, by issuing money these days, a government just promises that its money is worth as much as its economy is worth; this is why governments and citizens the world over are awfully concerned about having a "strong economy". However, what exactly defines "the economy" is rather complicated, and it only gets trickier with every passing year.

So, a very useful side effect of money – as opposed to gift cards – is that as long as the promise of money remains true (i.e. as long as the government keeps taking care of its people, and as long as the economy remains strong), regular people can use whatever money they have left-over (i.e. whatever money doesn't return to the government, at which point it ceases to exist), as a useful medium of exchange in regular day-to-day commerce. But remember: when you exchange your money for a kettle at the shop, this is what happens: at the end of the day, you have a kettle (something of value); and the shop has a promise from the government that it is entitled to something (presumably, something of value).

Recent history

Back to our history class. This time, more recent history. The modern monetary system could be said to have begun in 1694, when the Bank of England was founded. The impetus for establishing it should be familiar to all 21st-century readers: the government of England was deeply in debt; and the Bank was founded in order to acquire a loan of £1.2 million for the Crown. Over the subsequent centuries, it evolved to become the world's first central bank. Also, of great note, this marked one of the first times in history that a bank (rather than the sovereign) was given the authority to issue new money.

The grand tradition of English banking: The Dawes, Tomes, Mousely, Grubbs, Fidelity Fiduciary Bank.
The grand tradition of English banking: The Dawes, Tomes, Mousely, Grubbs, Fidelity Fiduciary Bank.
Image source: Scene by scene Mary Poppins.

During the 18th and 19th centuries, and also well into the 20th century, the modern monetary system was based on the gold standard. Under this system, countries tied the value of their currency to gold, by guaranteeing to buy and sell gold at a fixed price. As a consequence, the value of a country's currency depended directly on the amount of gold reserves in its possession. Also, consequently, money at that time represented a promise, by the money's issuer, to give an exact quantity of gold to its current holder. This could be seen as a hangover from ancient and medieval times, when money was literally worth the weight of gold (or, more commonly, silver) of which the coins were composed (as discussed above).

During that same time period, the foundation currency – and by far the dominant currency – of the world monetary system was the British Pound. As Britain was the world's strongest economy, the seat of the world's largest empire (and hence of the world's largest trading bloc), and the world's most industrialised nation, all other currencies were valued relative to the Pound. The Pound became the reserve currency of choice for nations worldwide, and most international transactions were denominated with it.

In the aftermath of World War II, the Allies emerged victorious; but the Pound Sterling met its defeat at long last, at the hands of a new world currency: the US Dollar. Because the War had taken place in Europe (and Asia), the financial cost to the European Allied powers was crippling; North America, on the other hand, hadn't witnessed a single enemy soldier set foot on its soil, and so it was that, with the introduction of the Bretton Woods system in 1944, the Greenback rapidly and ruthlessly conquered the world.

The Dollars are Coming!
The Dollars are Coming!
Image source: The Guardian: Reel history.

Under the Bretton Woods system, the gold standard remained in place: the only real difference, was that gold was now spelled with a capital S with a line through it ($), instead of being spelled with a capital L with a line through it (£). The US Dollar replaced the Pound as the dominant world reserve currency and international transaction currency.

The gold standard finally came to an end when, in 1971, President Nixon ended the direct convertibility of US Dollars to gold. Since then, the USD has continued to reign supreme over all other currencies (although it's been increasingly facing competition). However, under the current system, there is no longer an "other currencies -> USD -> gold" pecking order. Theoretically, all currencies are now created equal; and gold is now just one more commodity on the world market, rather than "the shiny stuff that gives money value".

Since the end of Bretton Woods, the world's major currencies exist in a floating exchange rate regime. This means that the only way to measure a given currency's value, is by determining what quantity of another given currency it's worth. Instead of being tied to the value of a real-life object (such as gold), the value of a currency just "floats" up and down, depending on the fluctuations in that country's economy, and depending on the fluctuations in people's relative perceptions of its value.

What we have now

The modern monetary system is a complex beast, but at its heart it consists of three players.

The mythical Hydra, a multi-headed monster. Grandaddy of the modern monetary system, perhaps?
The mythical Hydra, a multi-headed monster. Grandaddy of the modern monetary system, perhaps?
Image source: HydraVM.

First, there are the governments of the world. In most countries, there's a department that "represents" the government as a whole, within the monetary system: this is usually called the "Treasury"; it may also be called the Ministry of Finance, among other names. Contrary to what you might think, Treasury does not bring new money into existence (even though Treasury usually governs a country's mint, and thus Treasury is the manufacturer of new physical money).

As discussed in definitions (above), in a "pure" system, money comes into existence when the government issues it (as a promise), and money ceases to exist when the government takes it back (in return for fulfilling a promise). However, in the modern system, the job of bringing new money into existence has been delegated; therefore, money does not cease to exist, the moment that it returns to the government (i.e. the "un-creation" of money has also been delegated).

This delegation allows the government itself to function like any other individual or entity within the system. That is, the government has an "account balance", it receives monetary income (via taxation), it spends money (via its budget program), and it can either be "in the green" or "in the red" (with a strong tendency towards the latter). Thus, the government itself doesn't have to worry too much about the really complicated parts of the modern monetary system; and instead, it can just get on with the job of running the country. The government can also borrow money, to supplement what it receives from taxation; and it can lend money, in addition to its regular spending.

Second, there are these things called "central banks" (also known as "reserve banks", among other names). In a nutshell: the central bank is the entity to which all that stuff I just mentioned gets delegated. The central bank brings new money into existence – officially on behalf of the government; but since the government is usually highly restricted from interfering with the central bank's operation, this is a half-truth at best. It creates new money in a variety of ways. One way – which in practice is usually responsible for only a small fraction of overall money creation, but which I believe is worth focusing on nonetheless – is by buying government (i.e. Treasury) bonds.

Just what is a bond? (Seems we're not yet done with definitions, after all.) A bond is a type of debt (or a type of credit, depending on your perspective). A lends money to B, and in return, B gives A bonds. The bonds are a promise that the debt will be repaid, according to various terms (time period, interest payable, etc). So, bonds themselves have no value: they're just a promise that the holder of the bonds will receive something of value, at some point in the future. In the case of government bonds, the bonds are a promise that the government will provide something of value to their current holder.

But, hang on… isn't that also what money is? A promise that the government will provide something of value to the current holder of the money? So, let me get this straight: the Treasury writes a document (bonds) saying "The government (on behalf of the Treasury) promises to give the holder of this document something of value", and gives it to the central bank; and in return, the central bank writes a document (money) also saying "The government (on behalf of the central bank) promises to give the holder of this document something of value", and gives it to the Treasury; and at the end of the day, the government has more money? Or, in other words (no less tangled): the government lends itself money, and money is also itself a government loan? Ummm… WTF?!

Glad I'm not the only one that sees a slight problem here.
Glad I'm not the only one that sees a slight problem here.
Image source: Lol Zone.

Third, there are the commercial banks. The main role of these (private) companies is to safeguard the deposits of, and provide loans to, the general public. The main (original) source of commercial banks' money, is from the deposits of their customers. However, thanks to the practice of fractional reserve banking that's prevalent in the modern monetary system, commercial banks are also responsible for about 95% of the money creation that occurs today; almost all of this private-bank-created money is interest (and principal) from loans. So, yes: money is created out of thin air; and, yes, the majority of money is not created by the government (either on behalf of Treasury or the central bank), but by commercial banks. No surprise, then, that about 97% of the world's money exists only electronically in commercial bank accounts (with physical cash making up the other 3%).

This presents another interesting conundrum: all money supposedly comes from the government, and is supposedly a promise from the government that they will provide something of value; but in today's reality, most of our money wasn't created by the government, it was created by commercial banks! So, then: if I have $100 in my bank account, does that money represent a promise from the government, or a promise from the commercial banks? And if it's a promise from the commercial banks… what are they promising? Beats me. As far as I know, commercial banks don't promise to take care of society; they don't promise to exchange money for gold; I suppose the only possibility is that, much as the government promises that money is worth as much as the nation's economy is worth, commercial banks promise that money is worth as much as they are worth.

And what are commercial banks worth? A lot of money (and not much else), I suppose… which starts taking us round in circles.

I should also mention here, that the central banks' favourite and most oft-used tool in controlling the creation of money, is not the buying or selling of bonds; it's something else that we hear about all the time in the news: the raising or lowering of official interest rates. Now that I've discussed how 95% of money creation occurs via the creation of loans and interest within commercial banks, it should be clear why interest rates are given such importance by government and by the media. The central bank only sets the "official" interest rate, which is merely a guide for commercial banks to follow; but in practice, commercial banks adjust their actual interest rates to closely match the official one. So, in case you had any lingering doubts: the central banks and the commercial banks are, of course, all "in on it" together.

Oh yeah, I almost forgot… and then there are regular people. Just trying to eke out a living, doing whatever's necessary to bring home the dough, and in general trying to enjoy life, despite the best efforts of the multi-headed beast mentioned above. But they're not so important; in fact, they hardly count at all.

In summary: today's system is very big and complex, but for the most part it works. Somehow. Sort of.

Broken promises

In case you haven't worked it out yet: money is debt; debt is credit; and credit is promises.

Bankers, like politicians, are big on promises. In fact, bankers are full of promises (by definition, since they're full of money). And, also like politicians, bankers are good at breaking promises.

Or, to phrase it more accurately: bankers are good at convincing you to make promises (i.e. to take out a loan); and they're good at promising you that you'll have no problem in not breaking your promises (i.e. in paying back the loan); and they're good at promising you that making and not breaking your promises will be really worthwhile for you (i.e. you'll get a return on your loan); and (their favourite part) they're exceedingly good at holding you to your promises, and at taking you to the cleaners in the event that you are truly unable to fulfil your promises.

Since money is debt, and since money makes the world go round, the fact that the world is full of debt really shouldn't make anyone raise an eyebrow. What this really means, is that the world is full of promises. This isn't necessarily a bad thing, assuming that the promises being made are fair. In general, however, they are grossly unfair.

Let's take a typical business loan as an example. Let's say that Norbert wants to open a biscuit shop. He doesn't have enough money to get started, so he asks the bank for a loan. The bank lends Norbert a sum of money, with a total repayment over 10 years of double the value of the sum being lent (as is the norm). Norbert uses the money to buy a cash register, biscuit tins, and biscuits, and to rent a suitable shop venue.

There are two possibilities for Norbert. First, he generates sufficient business selling biscuits to pay off the loan (which includes rewarding the bank with interest payments that are worth as much as it cost him to start the business), and he goes on selling biscuits happily ever after. Second, he fails to bring in enough revenue from the biscuit enterprise to pay off the loan, in which case the bank seizes all of his business-related assets, and he's left with nothing. If he's lucky, Norbert can go back to his old job as a biscuit-shop sales assistant.

What did Norbert input, in order to get the business started? All his time and energy, for a sustained period. What was the real cost of this input? Very high: Norbert's time and energy is a tangible asset, which he could have invested elsewhere had he chosen (e.g. in building a giant Lego elephant). And what is the risk to Norbert? Very high: if business goes bad (and the biscuit market can get volatile at times), he loses everything.

What did the bank input, in order to get the business started? Money. What was the real cost of this input? Nothing: the bank pulled the money out of thin air in order to lend it to Norbert; apart from some administrative procedures, the bank effectively spent nothing. And what is the risk to the bank? None: if business goes well, they get back double the money that they lent Norbert (which was fabricated the moment that the loan was approved anyway); if business goes bad, they seize all Norbert's business-related assets (biscuit tins and biscuits are tangible assets), and as for the money… well, they just fabricated it in the first place anyway, didn't they?

Broke(n) nations

One theme that I haven't touched on specifically so far, is the foreign currency exchange system. However, I've already explained that money is worth as much as a nation's economy is worth; so, logically, the stronger a nation's economy is, the more that nation's money is worth. This is the essence of foreign currency exchange mechanics. Here's a formula that I just invented, but that I believe is reasonably accurate, for determining the exchange rate r between two given currencies a and b:

My unofficial exchange rate formula.
My unofficial exchange rate formula.
That's: r(a:b) = (s_a ÷ q_a) : (q_b ÷ s_b)

Where s_x is the strength of the given economy, and q_x is the quantity of the given currency in existence.

So, for example, say we want to determine the exchange rate of US Dollars to Molvanîan Strubls. Let's assume that the US economy is worth "1,000,000" (which is good), and that there are 1,000,000 US Dollars (a) in existence; and let's assume that the Molvanîan economy is worth "100" (which is not so good), and that there are 1,000,000,000 Molvanîan Strubls (b) in existence. Substituting values into the formula, we get:

r(a:b) = (1,000,000 ÷ 1,000,000 USD) : (1,000,000,000 Strubls ÷ 100)

r(a:b) = 1 USD : 10,000,000 Strubls

This, in my opinion, should be sufficient demonstration of why the currencies of strong economies have value, and why people the world over like getting their hands dirty with them; and why the currencies of weak economies lack value, and why their only practical use is for cleaning certain dirty orifices of one's body.
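
For the code-inclined, here's the same back-of-the-envelope arithmetic as a tiny PHP sketch; the function name and the Molvanîan figures are, of course, made up purely for illustration.

<?php
/**
 * A sketch of my unofficial exchange rate formula: one unit of
 * currency a is worth (s_a / q_a) * (q_b / s_b) units of currency b.
 *
 * @param $s_a
 *   Strength of economy a (in arbitrary units).
 * @param $q_a
 *   Quantity of currency a in existence.
 * @param $s_b
 *   Strength of economy b (in the same arbitrary units).
 * @param $q_b
 *   Quantity of currency b in existence.
 *
 * @return
 *   How many units of currency b one unit of currency a buys.
 */
function unofficial_exchange_rate($s_a, $q_a, $s_b, $q_b) {
  return ($s_a / $q_a) * ($q_b / $s_b);
}

// USA: economy worth "1,000,000", with 1,000,000 USD in existence.
// Molvanîa: economy worth "100", with 1,000,000,000 Strubls in existence.
print unofficial_exchange_rate(1000000, 1000000, 100, 1000000000);
// Prints 10000000: i.e. 1 USD buys 10,000,000 Strubls.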

Or, for a real-world example of a currency worth less than its weight in toilet paper, see Zimbabwe.
Or, for a real-world example of a currency worth less than its weight in toilet paper, see Zimbabwe.
Image source: praag.org.

Now, getting back to the topic of lending money. Above, I discussed how banks lend money to individuals. As it turns out, banks also lend money to foreign countries. Whether the lender is a commercial bank, a central bank, or an international bank (such as the IMF) doesn't matter in this context; nor does it matter whether the borrower is a foreign individual, a foreign company, or a foreign government. The point is: there are folks whose local currency isn't accepted worldwide (if it's even accepted locally), and who need to purchase goods and services from the world market; and so, these folks ask for a loan from banks elsewhere, who are able to lend them money in a strong currency.

The example scenario that I described above (Norbert), applies equally here. Only this time, Norbert is a group of people from a developing country (let's call them The Morbert Group), and the bank is a corporation from a developed country. As in Norbert's case, The Morbert Group input a lot of time and effort to start a new business; and the bank input money that it pulled out of thin air. And, as in Norbert's case, The Morbert Group has a high risk of losing everything, and at the very least is required to pay an exorbitant amount of interest on its loan; whereas the bank has virtually no risk of losing anything, as it's a case of "the house always wins".

So, the injustice of grossly unfair and oft-broken promises between banks and society doesn't just occur within a single national economy, it occurs on a worldwide scale within today's globalised economy. Yes, the bank is the house; and yes (aside from a few hiccups), the house just keeps winning and winning. This is how, in the modern monetary system, a nation's rich people keep getting richer while its poor people keep getting poorer; and it's how the world's rich countries keep getting richer, while the poor countries keep getting poorer.

Serious problems

Don't ask "how did it come to this?" I've just spent a large number of words explaining how it's come to this (see everything above). Face the facts: it has come to this. The modern monetary system has some very serious problems. Here's my summary of what I think those problems are:

  • Currency inequality promotes poverty. In my opinion, this is the worst and the most disgraceful of all our problems. A currency is only worth as much as its nation's economy is worth. This is wrong. It means that the people who are issued that currency, participate in the global economy with a giant handicap. It's not their fault that they were born in a country with a weaker economy. Instead, a currency should be worth as much as its nation's people are worth. And all people in all countries are "worth" the same amount (or, at least, they should be).
  • Governments manipulate currency for their own purposes. All (widely-accepted) currency in the world today is issued by governments, and is controlled by the world's central banks. While many argue that the tools used to manipulate the value of currency – such as adjusting interest rates, and trading in bonds – are "essential" for "stabilising" the economy, it's clear that very often, governments and/or banks abuse these tools in the pursuit of more questionable goals. Governments and central banks (particularly those in "strong" countries, such as the US) shouldn't have the level of control that they do over the global financial system.
  • Almost all new money is created by commercial banks. The creation of new money should not be entrusted to a handful of privileged companies around the world. These companies continue to simply amass ever-greater quantities of money, further promoting poverty and injustice. Money creation should be more fairly distributed between all individuals and between all nations.
  • It's no longer clear what gives money its value. In the olden days, money was "backed" by gold. Under the current system, money is supposedly backed by the value of the issuing country's economy. However, the majority of new money today is created by commercial banks, so it's unclear if that's really true or not. Perhaps a new definition of what "backs" money is needed?

Now, at long last – after much discussion of promises made and promises broken – it's time to fulfil the promise that I made at the start of this article. Time to solve all the monetary problems of the modern world!

Possible solutions

One alternative to the modern monetary system, and its fiat money roots (i.e. money "backed by nothing"), is a return to the gold standard. This is actually one of the more popular alternatives, with many arguing that it worked for thousands of years, and that it's only for the past 40-odd years (i.e. since the Nixon Shock in 1971) that we've been experimenting with the current (broken) system.

This is a very conservative argument. The advocates of "bringing back the gold standard" are heavily criticised by the wider community of economists, for failing to address the issues that caused the gold standard to be dropped in the first place. In particular, the critics point out that the modern world economy has been growing much faster than the world's supply of gold has been growing, and that there literally isn't enough physical gold available, for it to serve as the foundation of the modern monetary system.

Personally, I take the critics' side: the gold standard worked up until modern times; but gold is a finite resource, and it has certain physical characteristics that limit its practical use (e.g. it's quite heavy, it's not easily divisible into sufficiently small parts, etc). Gold will always be a valuable commodity – and, as the current economic crisis shows, people will always turn to gold when they lose confidence in even the most stable of regular currencies – but its days as the foundation of currency were terminated for a reason, and so I don't think it's altogether bad that we relegate the gold standard to the annals of history.

How about getting rid of money altogether? For virtually as long as money has existed, it's been often labelled "the root of all evil". The most obvious solution to the world's money problems, therefore, is one that's commonly proposed: "let's just eliminate money." This has been the cry of hippies, of communists, of utopianists, of futurists, and of many others.

Imagine no possessions... I wonder if you can.
Imagine no possessions... I wonder if you can.
Image source: Etsy.

Unfortunately, the most prominent example so far in world history of (effectively) eliminating money – 20th century communism – was also an economic disaster. In the Soviet Union, although there was money, the price of all basic goods and services was fixed, and everything was centrally distributed; so money was, in effect, little more than a rationing token. Hence the famous Russian joke: "We pretend to work, and they pretend to pay us".

Utopian science fiction is also rife with examples of a future without money. The best-known and best-developed example is Star Trek (an example with which I'm also personally well-acquainted). In the Star Trek universe, where virtually all of humanity's basic needs (i.e. food, clothing, shelter, education, medicine) are provided for in limitless supply by modern technology, "the economics of the future are somewhat different". As Captain Picard says in First Contact: "The acquisition of wealth is no longer the driving force in our lives. We work to better ourselves and the rest of humanity." This is a great idea in principle; but Star Trek also fails to address the practical issues of such a system, any better than contemporary communist theory does.

Star Trek IV:
Star Trek IV: "They're still using money. We need to get some."
Image source: Moar Powah.

While I'm strongly of the opinion that our current monetary system needs reform, I don't think that abolishing the use of money is: (a) practical (assuming that we want trade and market systems to continue existing in some form); or (b) going to actually address the issues of inequality, corruption, and systemic instability that we'd all like to see improved. Abolishing money altogether is not practical, because we do require some medium of exchange in order for the civilised world (which has always been built on trade) to function; and it's not going to address the core issues, because money is not the root of all evil, money is just a tool which can be used for either good or bad purposes (the same as a hammer can be used to build a house or to knock someone on the head – the hammer itself is "neutral"). The problem is not money; the problem is greed.

For a very different sci-fi take on the future of money, check out the movie In Time (2011). In this dystopian work, there is a new worldwide currency: time. Every human being is born with a "biological watch", that shows on his/her forearm how much time he/she has left to live. People can earn time, trade with time, steal time, donate time, and store time (in time banks). If you "time out" (i.e. run out of time), you die instantly.

In Time: you're only worth as many seconds as you have left to live.
In Time: you're only worth as many seconds as you have left to live.
Image source: MyMovie Critic.

The monetary system presented by In Time is interesting, because it's actually very stable (i.e. the value of "time" is very clear, and time as a currency is quite resilient to inflation / deflation, speculation, etc), and it's a currency that's "backed" by a real commodity (i.e. time left alive; commodities don't get much more vital). However, the system also has gross potential for inequality and corruption – and indeed, in the movie, it's clearly demonstrated that everyone could live indefinitely if the banks just kept handing out infinite quantities of time; but instead, time is meagerly rationed out by the rich and powerful elite (who can create more time out of thin air whenever they want, much as today's elite do with money), in order to enforce a status quo upon the impoverished masses.

One of the most concerted efforts that has been made in recent times, to disrupt (and potentially revolutionise) the contemporary monetary system, is the much-publicised Bitcoin project. Bitcoin is a virtual currency, which isn't issued or backed by any national government (or by any official organisation at all, for that matter), but which is engineered to mimic many of the key characteristics of gold. In particular, there's a finite supply of Bitcoins; and new Bitcoins can only be created by "mining" them.

Bitcoin makes no secret of the fact that it aims to become a new global currency, and to bring about the demise of traditional government-issued currency. As I've already stated here, I'm in favour of replacing the current world currencies; and I applaud Bitcoin's pioneering endeavours to do this. Bitcoin sports the key property that I think any contender to the "brave new world of money" would need: it's not generated by central banks, nor by any other traditional contemporary authority. However, there are a number of serious flaws in the Bitcoin model, which (in my opinion) mean that Bitcoin cannot and (more importantly) should not ever achieve this.

Most importantly, Bitcoin fails to adequately address the issue of "money creation should be fairly distributed between all". In the Bitcoin model, money creation is in the hands of those who succeed in "mining" new Bitcoins; and "mining" Bitcoins consists of solving computationally expensive cryptographic calculations, using the most powerful computer hardware possible. So, much as Bitcoin shares many of gold's advantages, it also shares many of its flaws. Much as gold mining unfairly favours those who discover the gold-hills first, and thereafter favours those with the biggest drills and the most grunt; so too does Bitcoin unfairly favour those who knew about Bitcoin from the start, and thereafter favour those with the beefiest and best-engineered hardware.

Mining: a dirty business that rewards the boys with the biggest toys.
Mining: a dirty business that rewards the boys with the biggest toys.
Image source: adelaidenow.

Bitcoin also fails to address the issue of "what gives money its value". In fact, "what gives Bitcoin its value" is even less clear than "what gives contemporary fiat money its value". What "backs" Bitcoin? Not gold. Not any banks. Not any governments or economies. Supposedly, Bitcoin "is" the virtual equivalent of gold; but then again (as others have stated), I'll believe that the day I'm shown how to convert digital Bitcoins into physical metal chunks that are measured in Troy ounces. It's also not clear if Bitcoin is a commodity or a currency (or both, or neither); and if it's a commodity, it's not completely clear how it would succeed as the foundation of the world monetary system, where gold failed.

Plus, assuming that Bitcoin is the virtual equivalent of gold, the fact that it's virtual (i.e. technology-dependent for its very existence) is itself a massive disadvantage, compared to a physical commodity. What happens if the Internet goes down? What happens if there's a power failure? What happens if the world runs out of computer hardware? Bye-bye Bitcoin. What happens to gold (or physical fiat money) in any of these cases? Nothing.

Additionally, there's also significant doubt and uncertainty over the credibility of Bitcoin, meaning that it fails to address the issue of "manipulation of currency [by its issuers] for their own purposes". In particular, many have accused Bitcoin of being a giant scam in the form of a Ponzi scheme, which will ultimately crash and burn, but not before the system's founders and earliest adopters "jump ship" and take a fortune with them. The fact that Bitcoin's inventor goes by the fake name "Satoshi Nakamoto", and has disappeared from the Bitcoin community (and kept his true identity a complete mystery) ever since, hardly enhances Bitcoin's reputation.

This article is not about Bitcoin; I'm just presenting Bitcoin here, as one of the recently-proposed solutions to the problems of the world monetary system. I've heavily criticised Bitcoin here, to the point that I've claimed it's not suitable as the foundation of a new world monetary system. However, let me emphasise that I also really admire the positive characteristics of Bitcoin, which are numerous; and I hope that one day, a newer incarnation is born that borrows these positive characteristics of Bitcoin, while also addressing Bitcoin's flaws (and we owe our thanks to Bitcoin's creator(s), for leaving us an open-source system that's unencumbered by copyright, patents, etc). Indeed, I'd say that just as non-virtual money has undergone numerous evolutions throughout history (not necessarily with each new evolution being "better" than its predecessors); so too will virtual currency undergo numerous evolutions (hopefully with each new evolution being "better"). Bitcoin is only the beginning.

My humble proposal

The solution that I'd like to propose, is a hybrid of various properties of what's been explored already. However, the fundamental tenet of my solution, is something that I haven't discussed at all so far, and it is as follows:

Every human being in the world automatically receives an "allowance", all the time, all their life. This "allowance" could be thought of as a "global minimum wage"; although everyone receives it regardless of, and separate to, their income from work and investments. The allowance could be received once a second, or once a day, or once a month – doesn't really matter; I guess that's more a practical question of the trade-off in: "the more frequent the allowance, the more overhead involved; the less frequent the allowance, the less accurate the system is."

Ideally, the introduction of this allowance would be accompanied by the introduction of a new currency; and this allowance would be the only permitted manner in which new units of the currency are brought into existence. That is, new units of the currency cannot be generated ad lib by central banks or by any other organisation (and it would be literally impossible to circumvent this, a la Bitcoin, thus making the currency a commodity rather than a fiat entity). However, a new currency is not the essential idea – the global allowance per person is the core – and it could be done with one or more existing currencies, although this would obviously have disadvantages.

The new currency for distributing the allowance would also ideally exist primarily in digital form. It would be great if, unlike Bitcoin and its contemporaries, the currency could also exist in a physical commodity form, with an easy way of transforming the currency between digital and physical form, and vice versa. This would require technology that doesn't currently exist – or, at the least, some very clever engineering with the use of current technology – and is more "wishful thinking" at this stage. Additionally, the currency could also exist as an "account balance" genetically / biologically stored within each person, much like in the movie In Time; except that you don't die if you run out of money (you just ain't got no money). However, all of this is non-essential bells and whistles, supplementing my core proposal.

There are a number of other implementation details that I don't think all need to be addressed at the conceptual level, but that would be significant at the practical level. For example: should the currency be completely "tamper-proof", or should there be some new international body that could modify various parameters (e.g. change the amount of the allowance)? And should the allowance be exactly the same for everyone, or should it vary according to age, physical location, etc? Personally, I'd opt for a completely "tamper-proof" currency, and for a completely standard allowance; but other opinions may differ.

Taxation would operate in much the same way as it does now (i.e. a government's primary source of revenue, would be taxing the income of its citizens); however, the wealth difference between countries would reduce significantly, because at a minimum, every country would receive revenue purely based on its population.

A global allowance (issued in the form of a global currency), doesn't necessarily mean a global government (although the two would certainly function much better together). It also doesn't necessarily mean the end of national currencies; although national currencies would probably, in the long run, struggle to compete for value / relevance with a successful global currency, and would die out.

If there's a global currency, and a global allowance for everyone on the planet, but still individual national governments (some of which would be much poorer and less developed than others), then taxation would still be at the discretion of each nation. Quite possibly, all nations would end up taxing 100% of the allowance that their citizens receive (and corrupt third-world nations would definitely do this); in which case it would not actually be an allowance for individuals, but just a way of enforcing more economic equality between countries based on population.

However, this doesn't necessarily make the whole scheme pointless. If a developing country receives the same revenue from its population's global allowance, as a developed country does (due to similar population sizes), then: the developing country would be able to compete more fairly in world trade; it would be able to attract more investment; and it wouldn't have to ask for loans and to be indebted to wealthier countries.

So, with such a system, the generated currency wouldn't be backed by anything (no more than Bitcoin is backed by anything) – but it wouldn't be fiat, either; it would be a commodity. In effect, people would be the underlying commodity. This is a radical new approach to money. It would also have potential for corruption (e.g. it could lead to countries kidnapping / enslaving each others' populations, in order to steal the commodity value of another country). However, appropriate practical safeguards in the measuring of a country's population, and in the actual distribution of new units of the currency, should be able to prevent this.

It's not absolutely necessary that a new global currency is created, in order to implement the "money for all countries based on population" idea: all countries could just be authorised to mint / receive a quantity of an existing major currency (e.g. USD or EUR) proportional to their populations. However, this would be more prone to corruption, and would lack the other advantages of a new global currency (i.e. not backed by any one country, not produced by any central bank).

It has been suggested that a population-based currency is doomed to failure:

So here is were [sic] you need to specify. What happens at round two?

If you do nothing things would go back to how they are now. The rich countries would have the biggest supply of this universal currency (and the most buying power) and the poor countries would loose [sic] their buying power after this one explosive buying spree.

And things would go back to exactly as they are now- no disaster-but no improvemen [sic] to anything.

But if youre [sic] proposing to maintain this condition of equal per capita money supply for every nation then you have to rectify the tendency of the universal currency to flow back to the high productivity countries. Buck the trend of the market somehow.

Dont [sic] know how you would do it. And if you were able to do it I would think that it would cause severe havoc to the world economy. It would amount to very severe global income redistribution.

The poor countries could use this lopsided monetary system to their long term advantage by abstaining from buying consumer goods and going full throttle into buying capital goods from industrialized world to industrialize their own countries. In the long term that would be good for them and for the rich countries as well.

So it could be viewed as a radical form of foreign aid. But its [sic] a little too radical.

Source: Wrong Planet.

Alright: so once you get to "round two" in this system, the developed countries once again have more money than the developing countries. However, that isn't any worse than what we've got at the moment.

And, alright: so this system would effectively amount to little more than a radical form of foreign aid. But why "too radical"? In my opinion, the way the world currently manages foreign aid is not working (as evidenced by the fact that the gap between the world's richest and poorest nations is increasing, not decreasing). The world needs a radical form of foreign aid. So, in my opinion: if the net result of a global currency whose creation and distribution is tied to national populations, is a radical form of foreign aid; then surely that would be a good system!

Conclusion

So, there you have it. A monster of an article examining the entire history of money, exploring the many problems with the current world monetary system, and proposing a humble solution, which isn't necessarily a good solution (in fact, it isn't even necessarily better than, or as good as, the other possible solutions that are presented here), but which at least takes a shot at tackling this age-old dilemma. Money: can't live with it; can't live without it.

Money has already been a work-in-progress for about 5,000 years; and I'm glad to see that at this very moment, efforts are being actively made to continue refining that work-in-progress. I think that, regardless of what theory of money one subscribes to (e.g. money as credit, money as a commodity, etc), one could describe money as being the "grease" in the global trade machine, and actual goods and services as being the "cogs and gears" in the machine. That is: money isn't the machine itself, and the machine itself is more important than money; but then again, the machine doesn't function without money; and the better the money works, the better the machine works.

So, considering that trade is the foundation of our civilised existence… let's keep refining money. There's still plenty of room for improvement.

]]>
Configuring Silex (Symfony2) and Monolog to email errors https://greenash.net.au/thoughts/2013/03/configuring-silex-symfony2-and-monolog-to-email-errors/ Sat, 30 Mar 2013 00:00:00 +0000 https://greenash.net.au/thoughts/2013/03/configuring-silex-symfony2-and-monolog-to-email-errors/ There's a pretty good documentation page on how to configure Monolog to email errors in Symfony2. This, and all other documentation that I could find on the subject, works great if: (a) you're using the Symfony2 Standard Edition; and (b) you want to send emails with Swift Mailer. However, I couldn't find anything for my use case, in which: (a) I'm using Silex; and (b) I want to send mail with PHP's native mail handler (Swift Mailer is overkill for me).

Turns out that, after a bit of digging and poking around, it's not so hard to cobble together a solution that meets this use case. I'm sharing it here, in case anyone else finds themselves with similar needs in the future.

The code

Assuming that you've installed both Silex and Monolog (by adding silex/silex and monolog/monolog to the require section of your composer.json file, or by some alternate install method), you'll need something like this for your app's bootstrap code (in my case, it's in my project/app.php file):

<?php

/**
 * @file
 * Bootstraps this Silex application.
 */

$loader = require_once __DIR__ . '/../vendor/autoload.php';

$app = new Silex\Application();

function get_app_env() {
  $gethostname_result = gethostname();

  $gethostname_map = array(
    'prodservername' => 'prod',
    'stagingservername' => 'staging',
  );

  $is_hostname_mapped = !empty($gethostname_result) &&
                        isset($gethostname_map[$gethostname_result]);

  return $is_hostname_mapped ? $gethostname_map[$gethostname_result]
                             : 'dev';
}

$app['env'] = get_app_env();

$app['debug'] = $app['env'] == 'dev';

$app['email.default_to'] = array(
  'Dev Dude <dev.dude@nonexistentemailaddress.com>',
  'Manager Dude <manager.dude@nonexistentemailaddress.com>',
);

$app['email.default_subject'] = '[My App] Error report';

$app['email.default_from'] =
  'My App <my.app@nonexistentemailaddress.com>';

$app->register(new Silex\Provider\MonologServiceProvider(), array(
  'monolog.logfile' =>  __DIR__ . '/../log/' . $app['env'] . '.log',
  'monolog.name' => 'myapp',
));

$app['monolog'] = $app->share($app->extend('monolog',
function($monolog, $app) {
  if (!$app['debug']) {
    $monolog->pushHandler(new Monolog\Handler\NativeMailerHandler(
      $app['email.default_to'],
      $app['email.default_subject'],
      $app['email.default_from'],
      Monolog\Logger::CRITICAL
    ));
  }

  return $monolog;
}));

return $app;

I've got some code here for determining the current environment (which can be prod, staging or dev), and for only enabling the error emailing functionality for environments other than dev. Up to you whether you want / need that functionality; plus, this example is just one of many possible ways to implement it.

I followed the Silex docs for customising Monolog by adding extra handlers, which is actually very easy to use, although it's lacking any documented examples.
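
To round things off, here's a quick usage sketch – the route path and log message are invented purely for illustration. Anything you log at CRITICAL level or above is written to the log file as usual; and, outside the dev environment, the handler configured above also sends it out as an error report email:

<?php
// E.g. in your routes / controllers file (anywhere that $app is in scope).
$app->get('/fire-drill', function () use ($app) {
  // This entry goes to the log file as usual; in staging / prod, the
  // NativeMailerHandler added above also sends it out as an email.
  $app['monolog']->addCritical('Biscuit stock level critically low', array(
    'shop' => "Norbert's",
  ));

  return 'Error report sent (unless this is the dev environment).';
});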

That's about it, really. Using this code, you can have a Silex app that logs errors to a file (the usual) in all environments, and that also sends an error email to one or more addresses when running in your non-dev environments. Not rocket science – but, in my opinion, it's an important setup to be able to achieve in pretty much any web framework (i.e. regardless of your technology stack, receiving email notification of critical errors is a recommended best practice); and it doesn't seem to be documented anywhere so far for Silex.

]]>
Show a video's duration with Media: YouTube and Computed Field https://greenash.net.au/thoughts/2013/03/show-a-videos-duration-with-media-youtube-and-computed-field/ Thu, 28 Mar 2013 00:00:00 +0000 https://greenash.net.au/thoughts/2013/03/show-a-videos-duration-with-media-youtube-and-computed-field/ I build quite a few Drupal sites that use embedded YouTube videos, and my module of choice for handling this is Media: YouTube, which is built upon the popular Media module. The Media: YouTube module generally works great; but on one site that I recently built, I discovered one of its shortcomings. It doesn't let you display a YouTube video's duration.

I thought up a quick, performant and relatively easy way to solve this. With just a few snippets of custom code, and the help of the Computed Field module, showing video duration (in hours / minutes / seconds) for a Media: YouTube managed asset, is a walk in the park.

Getting set up

First up, install the Media: YouTube module (and its dependent modules) on a Drupal 7 site of your choice. Then, add a YouTube video field to one of the site's content types. For this example, I added a field called 'Video' (field_video) to my content type 'Page' (page). Be sure to select a 'field type' of 'File', and a 'widget' of type 'Media file selector'. In the field settings, set 'Allowed remote media types' to just 'Video', and set 'Allowed URI schemes' to just 'youtube://'.

To configure video display, go to 'Administration > Configuration > Media > File types' in your site admin, and for 'Video', click on 'manage file display'. You should be on the 'default' tab. For 'Enabled displays', enable just 'YouTube Video'. Customise the other display settings to your tastes.

Add a YouTube video to one of your site's pages. For this example, I've chosen one of the many clips highlighting YouTube's role as the zenith of modern society's intellectual capacity: a dancing duck.

To show the video within your site's theme, open up your theme's template.php file, and add the following preprocess function (in this example, my theme is called foobar):

<?php
/**
 * Preprocessor for node.tpl.php template file.
 */
function foobar_preprocess_node(&$vars) {
  if ($vars['node']->type == 'page' &&
      !empty($vars['node']->field_video['und'][0]['fid'])) {
    $video_file = file_load($vars['node']->field_video['und'][0]['fid']);
    $vf = file_view_file($video_file, 'default', '');
    $vars['video'] = drupal_render($vf);
  }
}

And add the following snippet to your node.tpl.php file or equivalent (in my case, I added it to my node--page.tpl.php file):

<!-- template stuff bla bla bla -->

<?php if (!empty($video)): ?>
  <?php print $video; ?>
<?php endif; ?>

<!-- more template stuff bla bla bla -->

The duck should now be dancing for you:

Embrace The Duck.

Getting the duration

On most sites, you won't have any need to retrieve and display the video's duration by itself. As you can see, the embedded YouTube element shows the duration pretty clearly, and that's adequate for most use cases. However, if your client wants the duration shown elsewhere (other than within the embedded video area), or if you're just in the mood for putting the duration between a spantabulously vomitive pair of <font color="pink"><blink>2:48</blink></font> tags, then keep reading.

Unfortunately, the Media: YouTube module doesn't provide any functionality whatsoever for getting a video's duration (or much other video metadata, for that matter). But, have no fear: it turns out that querying a YouTube video's duration, based on video ID, is quick and painless in bare-bones PHP. Add this to a custom module on your site (in my case, I added it to my foobar_page.module):

<?php
/**
 * Gets a YouTube video's duration, based on video ID.
 *
 * Copied (almost exactly) from:
 * http://stackoverflow.com/questions/9167442/
 * get-duration-from-a-youtube-url/9167754#9167754
 *
 * @param $video_id
 *   YouTube video ID.
 *
 * @return
 *   Video duration (or FALSE on failure).
 */
function foobar_page_get_youtube_video_duration($video_id) {
  $data = @file_get_contents('http://gdata.youtube.com/feeds/api/videos/'
  . $video_id . '?v=2&alt=jsonc');
  if ($data === FALSE) {
    return FALSE;
  }

  $obj = json_decode($data);
  return $obj->data->duration;
}

Great – turns out that querying the YouTube API for the duration is very easy. But we don't want to perform an external HTTP request, every time we want to display a video's duration: that would be a potential performance issue (and, in the event that YouTube is slow or unavailable, it would completely hang the page loading process). What we should do instead, is only query the duration from YouTube when we save a node (or other entity), and then store the duration locally for easy retrieval later.

Storing the duration

There are a number of possibilities, for how to store this data. Using Drupal's variable_get() and variable_set() functionality is one option (with either one variable per duration value, or with all duration values stored in a single serialized variable). However, that has numerous disadvantages: it would negatively affect performance (both for retrieving duration values, and for the whole Drupal site); and, at the end of the day, it's an abuse of the Drupal variable system, which is only meant to be used for one-off values, not for values that are potentially set for every node on your site (sadly, it would be far from the first such case of abuse of the Drupal variable system – but the fact that other people / other modules do it, doesn't make it any less dodgy).

Patching the Media: YouTube module to have an extra database field for video duration, and making the module retrieve and store this value, would be another option. However, that would be a lot more work and a lot more code; it would also mean having a hacked version of the module, until (if and when) a patch for the module (that we'd have to submit and refine) gets committed on drupal.org. Plus, it would mean learning a whole lot more about the Field API, the Media module, and the File API than any sane person would care to subject themselves to.

Enter the Computed Field module. With the help of this handy module, we have the possibility of implementing a better, faster, nicer solution.

Add this to a custom module on your site (in my case, I added it to my foobar_page.module):

<?php
/**
 * Computed field callback.
 */
function computed_field_field_video_duration_compute(
&$entity_field, $entity_type, $entity,
$field, $instance, $langcode, $items) {
  if (!empty($entity->nid) && $entity->type == 'page' &&
      !empty($entity->field_video['und'][0]['fid'])) {
    $video_file = file_load($entity->field_video['und'][0]['fid']);
    if (!empty($video_file->uri) &&
        preg_match('/^youtube\:\/\/v\/.+$/', $video_file->uri)) {
      $video_id = str_replace('youtube://v/', '', $video_file->uri);
      $duration = foobar_page_get_youtube_video_duration($video_id);

      if (!empty($duration)) {
        $entity_field[0]['value'] = $duration;
      }
    }
  }
}

Next, install the Computed Field module on your Drupal site. Add a new field to your content type, called 'Video duration' (field_video_duration), with 'field type' and 'widget' of type 'Computed'. On the settings page for this field, you should see the message: "This field is COMPUTED using computed_field_field_video_duration_compute()". In the 'database storage settings', ensure that 'Data type' is 'text', and that 'Data length' is '255'. You can leave all other settings for this field at their defaults.

Re-save the node that has YouTube video content, in order to retrieve and save the new computed field value for the duration.
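
If you have more than a handful of existing video nodes, re-saving each one programmatically may be quicker than clicking through the edit forms; for example, something as simple as the following one-liner (with a hypothetical node ID) can be run via drush php-eval, or via the Devel module's "Execute PHP" page:

node_save(node_load(123));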

Displaying the duration

For formatting the duration (the raw value of which is stored in seconds) in hours:minutes:seconds format, here's a dodgy custom function that I whipped up. Use it, or don't – totally your choice. If you choose to use it, then add this to a custom module on your site:

<?php
/**
 * Formats the given time value in h:mm:ss format (if it's >= 1 hour),
 * or in mm:ss format (if it's < 1 hour).
 *
 * Based on Drupal's format_interval() function.
 *
 * @param $interval
 *   Time interval (in seconds).
 *
 * @return
 *   Formatted time value.
 */
function foobar_page_format_time_interval($interval) {
  $units = array(
    array('format' => '%d', 'value' => 3600),
    array('format' => '%d', 'value' => 60),
    array('format' => '%02d', 'value' => 1),
  );

  $granularity = count($units);
  $output = '';
  $has_value = FALSE;
  $i = 0;

  foreach ($units as $unit) {
    $format = $unit['format'];
    $value = $unit['value'];
    $new_val = floor($interval / $value);
    // Zero-pad every unit except the leading one, so that e.g. 3725
    // seconds formats as "1:02:05" rather than "1:2:05".
    $new_val_formatted = ($output !== '' ? ':' : '') .
                         sprintf($output !== '' ? '%02d' : $format,
                                 $new_val);
    if ((!$new_val && $i) || $new_val) {
      $output .= $new_val_formatted;

      if ($new_val) {
        $has_value = TRUE;
      }
    }

    if ($interval >= $value && $has_value) {
      $interval %= $value;
    }

    $granularity--;
    $i++;

    if ($granularity == 0) {
      break;
    }
  }

  return $output ? $output : '0:00';
}

Update your foobar_preprocess_node() function, with some extra code for making the formatted video duration available in your node template:

<?php
/**
 * Preprocessor for node.tpl.php template file.
 */
function foobar_preprocess_node(&$vars) {
  if ($vars['node']->type == 'page' &&
      !empty($vars['node']->field_video['und'][0]['fid'])) {
    $video_file = file_load($vars['node']->field_video['und'][0]['fid']);
    $vf = file_view_file($video_file, 'default', '');
    $vars['video'] = drupal_render($vf);

    if (!empty($vars['node']->field_video_duration['und'][0]['value'])) {
      $vars['video_duration'] = foobar_page_format_time_interval(
        $vars['node']->field_video_duration['und'][0]['value']);
    }
  }
}

Finally, update your node.tpl.php file or equivalent:

<!-- template stuff bla bla bla -->

<?php if (!empty($video)): ?>
  <?php print $video; ?>
<?php endif; ?>

<?php if (!empty($video_duration)): ?>
  <p><strong>Duration:</strong> <?php print $video_duration; ?></p>
<?php endif; ?>

<!-- more template stuff bla bla bla -->

Reload the page on your site, and lo and behold:

We have duration!

Final remarks

I hope this example comes in handy, for anyone else who needs to display YouTube video duration metadata in this way.

I'd also like to strongly note that what I've demonstrated here isn't solely applicable to this specific use case. With some modification, it could easily be applied to various related use cases. Other than duration, you could retrieve / store / display any of the other metadata fields available via the YouTube API (e.g. date video uploaded, video category, number of comments). Or, you could work with media from another source, using another Drupal media-enabled module (e.g. Media: Vimeo). Or, you could store externally-queried data for some completely different field. I encourage you to experiment and to use your imagination, when it comes to the Computed Field module. The possibilities are endless.
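
For instance, here's a minimal sketch of grabbing a video's upload date instead of its duration. It assumes the same gdata v2 / jsonc response shape as the duration snippet above, and the function name is purely illustrative:

<?php
/**
 * Gets a YouTube video's upload date, based on video ID.
 *
 * Assumes the same gdata v2 / jsonc response structure as
 * foobar_page_get_youtube_video_duration() above, where the
 * 'uploaded' property holds the upload date.
 *
 * @param $video_id
 *   YouTube video ID.
 *
 * @return
 *   Upload date string (or FALSE on failure).
 */
function foobar_page_get_youtube_video_uploaded($video_id) {
  $data = @file_get_contents('http://gdata.youtube.com/feeds/api/videos/'
  . $video_id . '?v=2&alt=jsonc');
  if ($data === FALSE) {
    return FALSE;
  }

  $obj = json_decode($data);
  return isset($obj->data->uploaded) ? $obj->data->uploaded : FALSE;
}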

]]>
Natural disaster risk levels of the world's largest cities https://greenash.net.au/thoughts/2013/03/natural-disaster-risk-levels-of-the-worlds-largest-cities/ Thu, 14 Mar 2013 00:00:00 +0000 https://greenash.net.au/thoughts/2013/03/natural-disaster-risk-levels-of-the-worlds-largest-cities/ Every now and again, Mother Nature reminds us that despite all of our modern technological and cultural progress, we remain mere mortals, vulnerable as always to her wrath. Human lives and human infrastructure continue to regularly fall victim to natural disasters such as floods, storms, fires, earthquakes, tsunamis, and droughts. At times, these catastrophes can even strike indiscriminately at our largest and most heavily-populated cities, including where we least expect them.

This article is a listing and an analysis of the world's largest cities (those with a population exceeding 10 million), and of their natural disaster risk level in a variety of categories. My list includes 23 cities, which represent a combined population of approximately 380 million people. That's roughly 5% of the world's population. Listing and population figures based on Wikipedia's list of metropolitan areas by population.

The world's largest cities. Satellite image courtesy of Google Maps.

The list

City Country Population (millions) Natural disaster risks
Tokyo Japan
32.45

Summary: very well-prepared for high risk of flooding, storms, and earthquakes.

Flood risk: high
Flood preparedness: high

Storm risk: high
Storm preparedness: high

Fire risk: low
Fire preparedness: medium

Earthquake risk: high
Earthquake preparedness: high

Tsunami risk: medium
Tsunami preparedness: medium

Drought risk: low
Drought preparedness: medium

References:

Seoul Korea
25.62

Summary: could be better prepared for high risk of flooding and storms.

Flood risk: high
Flood preparedness: medium

Storm risk: high
Storm preparedness: medium

Fire risk: low
Fire preparedness: medium

Earthquake risk: low
Earthquake preparedness: low

Tsunami risk: low
Tsunami preparedness: low

Drought risk: medium
Drought preparedness: medium

References:

Jakarta Indonesia
23.31

Summary: critically unprepared for high risk of flooding and storms.

Flood risk: high
Flood preparedness: low

Storm risk: high
Storm preparedness: low

Fire risk: low
Fire preparedness: low

Earthquake risk: medium
Earthquake preparedness: low

Tsunami risk: low
Tsunami preparedness: low

Drought risk: medium
Drought preparedness: low

References:

Delhi India
21.75

Summary: critically unprepared for high risk of flooding, storms, and drought.

Flood risk: high
Flood preparedness: low

Storm risk: high
Storm preparedness: low

Fire risk: medium
Fire preparedness: low

Earthquake risk: medium
Earthquake preparedness: low

Tsunami risk: low
Tsunami preparedness: low

Drought risk: high
Drought preparedness: low

References:

Mumbai India
20.75

Summary: critically unprepared for high risk of flooding, storms, and drought.

Flood risk: high
Flood preparedness: low

Storm risk: high
Storm preparedness: low

Fire risk: medium
Fire preparedness: low

Earthquake risk: low
Earthquake preparedness: low

Tsunami risk: low
Tsunami preparedness: low

Drought risk: high
Drought preparedness: low

References:

Mexico City Mexico
20.45

Summary: could be better prepared for high risk of flooding, earthquakes, and drought.

Flood risk: high
Flood preparedness: medium

Storm risk: medium
Storm preparedness: low

Fire risk: low
Fire preparedness: low

Earthquake risk: high
Earthquake preparedness: medium

Tsunami risk: low
Tsunami preparedness: low

Drought risk: high
Drought preparedness: medium

References:

São Paulo Brazil
19.95

Summary: could be better prepared for high risk of flooding.

Flood risk: high
Flood preparedness: medium

Storm risk: medium
Storm preparedness: medium

Fire risk: low
Fire preparedness: low

Earthquake risk: low
Earthquake preparedness: low

Tsunami risk: low
Tsunami preparedness: low

Drought risk: medium
Drought preparedness: low

References:

New York United States
19.75

Summary: could be better prepared for high risk of flooding and storms.

Flood risk: high
Flood preparedness: medium

Storm risk: high
Storm preparedness: medium

Fire risk: low
Fire preparedness: medium

Earthquake risk: low
Earthquake preparedness: medium

Tsunami risk: low
Tsunami preparedness: medium

Drought risk: low
Drought preparedness: medium

References:

Osaka Japan
17.38

Summary: very well-prepared for high risk of flooding, storms, and earthquakes.

Flood risk: high
Flood preparedness: high

Storm risk: high
Storm preparedness: high

Fire risk: low
Fire preparedness: medium

Earthquake risk: high
Earthquake preparedness: high

Tsunami risk: low
Tsunami preparedness: medium

Drought risk: low
Drought preparedness: medium

References:

Shanghai China
16.65

Summary: critically unprepared for high risk of flooding and storms.

Flood risk: high
Flood preparedness: low

Storm risk: high
Storm preparedness: low

Fire risk: low
Fire preparedness: low

Earthquake risk: low
Earthquake preparedness: low

Tsunami risk: low
Tsunami preparedness: low

Drought risk: medium
Drought preparedness: low

References:

Manila Philippines
16.30

Summary: critically unprepared for high risk of flooding and storms.

Flood risk: high
Flood preparedness: low

Storm risk: high
Storm preparedness: low

Fire risk: low
Fire preparedness: low

Earthquake risk: medium
Earthquake preparedness: low

Tsunami risk: medium
Tsunami preparedness: low

Drought risk: low
Drought preparedness: low

References:

Hong Kong-Shenzhen China
15.80

Summary: very well-prepared for high risk of storms.

Flood risk: medium
Flood preparedness: high

Storm risk: high
Storm preparedness: high

Fire risk: medium
Fire preparedness: medium

Earthquake risk: low
Earthquake preparedness: low

Tsunami risk: low
Tsunami preparedness: low

Drought risk: low
Drought preparedness: low

References:

Los Angeles United States
15.25

Summary: could be better prepared for high risk of fire, earthquake, and drought.

Flood risk: medium
Flood preparedness: medium

Storm risk: low
Storm preparedness: low

Fire risk: high
Fire preparedness: medium

Earthquake risk: high
Earthquake preparedness: medium

Tsunami risk: medium
Tsunami preparedness: low

Drought risk: high
Drought preparedness: medium

References:

Kolkata India
15.10

Summary: critically unprepared for high risk of flooding and storms.

Flood risk: high
Flood preparedness: low

Storm risk: high
Storm preparedness: low

Fire risk: medium
Fire preparedness: low

Earthquake risk: low
Earthquake preparedness: low

Tsunami risk: medium
Tsunami preparedness: low

Drought risk: medium
Drought preparedness: low

London United Kingdom
15.01

Summary: could be better prepared for high risk of flooding.

Flood risk: high
Flood preparedness: medium

Storm risk: medium
Storm preparedness: medium

Fire risk: low
Fire preparedness: medium

Earthquake risk: low
Earthquake preparedness: medium

Tsunami risk: low
Tsunami preparedness: medium

Drought risk: low
Drought preparedness: medium

References:

Moscow Russia
15.00

Summary: no high risks in area.

Flood risk: medium
Flood preparedness: low

Storm risk: low
Storm preparedness: low

Fire risk: low
Fire preparedness: low

Earthquake risk: low
Earthquake preparedness: low

Tsunami risk: low
Tsunami preparedness: low

Drought risk: low
Drought preparedness: low

References:

Cairo Egypt
14.45

Summary: could be better prepared for high risk of drought.

Flood risk: low
Flood preparedness: low

Storm risk: low
Storm preparedness: low

Fire risk: low
Fire preparedness: low

Earthquake risk: medium
Earthquake preparedness: low

Tsunami risk: low
Tsunami preparedness: low

Drought risk: high
Drought preparedness: medium

Buenos Aires Argentina
13.17

Summary: could be better prepared for high risk of flooding.

Flood risk: high
Flood preparedness: medium

Storm risk: medium
Storm preparedness: medium

Fire risk: low
Fire preparedness: medium

Earthquake risk: low
Earthquake preparedness: low

Tsunami risk: low
Tsunami preparedness: low

Drought risk: medium
Drought preparedness: medium

References:

Dhaka Bangladesh
12.80

Summary: critically unprepared for high risk of flooding, storms, and drought.

Flood risk: high
Flood preparedness: low

Storm risk: high
Storm preparedness: low

Fire risk: medium
Fire preparedness: low

Earthquake risk: medium
Earthquake preparedness: low

Tsunami risk: medium
Tsunami preparedness: low

Drought risk: high
Drought preparedness: low

References:

Beijing China
12.50

Summary: critically unprepared for high risk of flooding and storms.

Flood risk: high
Flood preparedness: low

Storm risk: high
Storm preparedness: low

Fire risk: medium
Fire preparedness: medium

Earthquake risk: medium
Earthquake preparedness: low

Tsunami risk: low
Tsunami preparedness: low

Drought risk: medium
Drought preparedness: low

References:

Karachi Pakistan
11.80

Summary: critically unprepared for high risk of flooding, storms, and drought.

Flood risk: high
Flood preparedness: low

Storm risk: high
Storm preparedness: low

Fire risk: medium
Fire preparedness: low

Earthquake risk: medium
Earthquake preparedness: low

Tsunami risk: medium
Tsunami preparedness: low

Drought risk: high
Drought preparedness: low

References:

Rio de Janeiro Brazil
11.85

Summary: could be better prepared for high risk of flooding.

Flood risk: high
Flood preparedness: medium

Storm risk: medium
Storm preparedness: medium

Fire risk: low
Fire preparedness: low

Earthquake risk: low
Earthquake preparedness: low

Tsunami risk: low
Tsunami preparedness: low

Drought risk: medium
Drought preparedness: low

References:

Paris France
10.42

Summary: could be better prepared for high risk of flooding.

Flood risk: high
Flood preparedness: medium

Storm risk: low
Storm preparedness: medium

Fire risk: low
Fire preparedness: medium

Earthquake risk: low
Earthquake preparedness: low

Tsunami risk: low
Tsunami preparedness: low

Drought risk: low
Drought preparedness: medium

References:

Notes:

  • Flood refers to risk of the metropolitan area itself becoming inundated with water.
  • Storm refers to risk of disaster storms (as opposed to regular storms), which are variously called "hurricanes", "cyclones", "monsoons", "typhoons", and other names (depending on region / climate).
  • Fire refers to risk of wildfire / bushfire (as opposed to urban fire) from forest or wilderness areas surrounding or within the metropolitan area.
  • Earthquake refers to risk of the metropolitan area itself being shaken by seismic activity.
  • Tsunami refers to risk of a seismically-triggered ocean wave hitting the metropolitan area itself.
  • Drought refers to risk of drought affecting the agricultural region in which the metropolitan area lies.

Analysis

The list above presents quite the sobering picture: of the 23 cities analysed, 9 are critically unprepared for one or more high risks; 10 could be better prepared for one or more high risks; and only 4 are well-prepared for high risks (one of which has no high risks at all). All in all, the majority of the inhabitants of the world's largest cities live with a significant risk of natural disaster, for which the city is not sufficiently well-prepared.

By far the most common natural disaster plaguing the list is flooding: it affects 19 of the 23 cities (with many of these cities also at risk from storms). This is understandable, since the majority of the world's large cities are situated on the coast. 15 of the 23 cities in the list are on or very near to the seashore. What's more, about half of the 23 cities are also on or very near to a river delta, with several of them being considered "mega-delta cities" – that is, cities whose metropolitan area lies within a flood-plain.

With the methodology I've used in this analysis, it doesn't really matter what the risk of a given natural disaster striking a city is; what's significant, is how prepared a given city is to handle its most high-risk disasters. After all, if a city is very well-prepared for a high risk, then a large part of the risk effectively cancels itself out (although the risk that remains is still significant, as some cities are at risk of truly monumental disasters for which one can never fully prepare). On the other hand, if a city is critically unprepared for a high risk, this means that really there are no mitigating factors – that city will suffer tremendously when disaster hits.

It should come as no surprise, therefore, that the summary / risk level for each city depends heavily on that country's level of development. For example, the Japanese cities are some of the most disaster-prone cities in the world; but they're also safer than numerous other, less disaster-prone cities, because of Japan's incredibly high level of preparedness for natural disasters (in particular, its world-class earthquake-proof building standards, and its formidable flood-control infrastructure). At the other extreme, the Indian cities are significantly less disaster-prone than many others (in particular, India has a low earthquake risk); but they're more dangerous, due to India's poor overall urban infrastructure, and its poor or non-existent flood-control infrastructure.

Conclusion

So: if you're picking one of the world's largest cities to live in, which would be a good choice? From the list above, the clear winner is Moscow, which is the only city with no high risk of any of the world's more common natural disasters. However, it does get pretty chilly there (Moscow has the highest latitude of all the cities in the list), and Russia has plenty of other issues aside from natural disasters.

The other cities in my list with a tick of approval are the Japanese mega-cities, Tokyo and Osaka. Although Japan is one of the most earthquake-prone places on Earth, you can count on the Japanese for being about 500 years ahead of the rest of the world earthquake-proof-wise, as they are about 500 years ahead of the rest of the world technology-wise in general. Hong Kong would also be a good choice, in picking a city very well-prepared for the natural disasters that it most commonly faces.

For all of you that are living in the other mega-cities of the developed world: watch out, because you're all living in cities that could be better prepared for natural disasters. I'm looking at you Seoul, New York, Los Angeles, London, and Paris. Likewise to the cities on the list in somewhat less-developed countries: i.e. Mexico City, São Paulo, Cairo, Buenos Aires, and Rio de Janeiro. You're all lagging behind in natural disaster risk management.

As for the cities on my list that are "in the red": you should seriously consider other alternatives, before choosing to live in any of these places. The developing nations of Indonesia, India, China, The Philippines, Bangladesh, and Pakistan are home to world mega-cities; however, their population bears (and, in many cases, regularly suffers) a critical level of exposure to natural disaster risk. Jakarta, Delhi, Mumbai, Shanghai, Manila, Kolkata, Dhaka, Beijing, and Karachi: thinking of living in any of these? Think again.

Additional References

]]>
Rendering a Silex (Symfony2) app via Drupal 7 https://greenash.net.au/thoughts/2013/01/rendering-a-silex-symfony2-app-via-drupal-7/ Fri, 25 Jan 2013 00:00:00 +0000 https://greenash.net.au/thoughts/2013/01/rendering-a-silex-symfony2-app-via-drupal-7/ There's been a lot of talk recently regarding the integration of the Symfony2 components, as a fundamental part of Drupal 8's core system. I won't rabble on repeating the many things that have already been said elsewhere; however, to quote the great Bogeyman himself, let me just say that "I think this is the beginning of a beautiful friendship".

On a project I'm currently working on, I decided to try out something of a related flavour. I built a stand-alone app in Silex (a sort of Symfony2 distribution); but, per the project's requirements, I also managed to heavily integrate the app with an existing Drupal 7 site. The app does almost everything on its own, except that: it passes its output to drupal_render_page() before returning the response; and it checks that a Drupal user is currently logged-in and has a certain Drupal user role, for pages where authorisation is required.

The result is: an app that has its own custom database, its own routes, its own forms, its own business logic, and its own templates; but that gets rendered via the Drupal theming system, and that relies on Drupal data for authentication and authorisation. What's more, the implementation is quite clean (minimal hackery involved) – only a small amount of code is needed for the integration, and then (for the most part) Drupal and Silex leave each other alone to get on with their respective jobs. Now, let me show you how it's done.

Drupal setup

To start with, set up a new bare-bones Drupal 7 site. I won't go into the details of Drupal installation here. If you need help with setting up a local Apache VirtualHost, editing your /etc/hosts file, setting up a MySQL database / user, launching the Drupal installer, etc, please refer to the Drupal installation guide. For this guide, I'll be using a Drupal 7 instance that's been installed to the /www/d7silextest directory on my local machine, and that can be accessed via http://d7silextest.local.

D7 Silex test site after initial setup.

Once you've got that (or something similar) up and running, and if you're keen to follow along, then keep up with me as I outline further Drupal config steps. Firstly, go to administration > people > permissions > roles, create a new role called 'administrator' (if it doesn't exist already). Then, assign the role to user 1.

Next, download the patches from Need DRUPAL_ROOT in include of template.php and Need DRUPAL_ROOT when rendering CSS include links, and apply them to your Drupal codebase. Note: these patches address some bugs in core, where certain PHP files are included without the DRUPAL_ROOT prefix properly prepended. As of writing, I've submitted these patches to drupal.org, but they haven't yet been committed. Please check the status of these issue threads – if they're now resolved, then you may not need to apply the patches (check exactly which version of Drupal you're using; as of Drupal 7.19, the patches are still needed).

If you're using additional Drupal contrib or custom modules, they may also have similar bugs. For example, I've also submitted Need DRUPAL_ROOT in require of include files for the Revisioning module (not yet committed as of writing), and Need DRUPAL_ROOT in require of og.field.inc for the Organic Groups module (now committed and applied in latest stable release of OG). If you find any more DRUPAL_ROOT bugs, that prevent an external script such as Symfony2 from utilising Drupal from within a subdirectory, then please patch these bugs yourself, and submit patches to drupal.org as I've done.

Enable the menu module (if it's not already enabled), and define a 'Page' content type (if not already defined). Create a new 'Page' node (in my config below, I assume that it's node 1), with a menu item (e.g. in 'main menu'). Your new test page should look something like this:

D7 Silex test site with test page.

That's sufficient Drupal configuration for the purposes of our example. Now, let's move on to Silex.

Silex setup

To start setting up your example Silex site, create a new directory, which is outside of your Drupal site's directory tree. In this article, I'm assuming that the Silex directory is at /www/silexd7test. Within this directory, create a composer.json file with the following:

{
    "require": {
        "silex/silex": "1.0.*"
    },
    "minimum-stability": "dev"
}

Get Composer (if you don't have it), by executing this command:

curl -s http://getcomposer.org/installer | php

Once you've got Composer, installing Silex is very easy, just execute this command from your Silex directory:

php composer.phar install

Next, create a new directory called web in your silex root directory; and create a file called web/index.php, that looks like this:

<?php

/**
 * @file
 * The PHP page that serves all page requests on a Silex installation.
 */


require_once __DIR__ . '/../vendor/autoload.php';

$app = new Silex\Application();

$app['debug'] = TRUE;

$app->get('/', function() use($app) {
  return '<p>You should see this outputting ' .
    'within your Drupal site!</p>';
});

$app->run();

That's a very basic Silex app ready to go. The app just defines one route (the 'home page' route), which outputs the text You should see this outputting within your Drupal site! on request. The Silex app that I actually built and integrated with Drupal did a whole lot more than this – but for the purposes of this article, a "Hello World" example is all we need.

To see this app in action, in your Drupal root directory create a symlink to the Silex web folder:

ln -s /www/silexd7test/web/ silexd7test

Now you can go to http://d7silextest.local/silexd7test/, and you should see something like this:

Silex serving requests stand-alone, under Drupal web path.

So far, the app is running under the Drupal web path, but it isn't integrated with the Drupal site at all. It's just running its own bootstrap code, and outputting the response for the requested route without any outside help. We'll be changing that shortly.

Integration

Open up the web/index.php file again, and change it to look like this:

<?php

/**
 * @file
 * The PHP page that serves all page requests on a Silex installation.
 */


require_once __DIR__ . '/../vendor/autoload.php';

$app = new Silex\Application();

$app['debug'] = TRUE;

$app['drupal_root'] = '/www/d7silextest';
$app['drupal_base_url'] = 'http://d7silextest.local';
$app['is_embedded_in_drupal'] = TRUE;
$app['drupal_menu_active_item'] = 'node/1';

/**
 * Bootstraps Drupal using DRUPAL_ROOT and $base_url values from
 * this app's config. Bootstraps to a sufficient level to allow
 * session / user data to be accessed, and for theme rendering to
 * be invoked.
 *
 * @param $app
 *   Silex application object.
 * @param $level
 *   Level to bootstrap Drupal to. If not provided, defaults to
 *   DRUPAL_BOOTSTRAP_FULL.
 */
function silex_bootstrap_drupal($app, $level = NULL) {
  global $base_url;

  // Check that Drupal bootstrap config settings can be found.
  // If not, throw an exception.
  if (empty($app['drupal_root'])) {
    throw new \Exception("Missing setting 'drupal_root' in config");
  }
  elseif (empty($app['drupal_base_url'])) {
    throw new \Exception("Missing setting 'drupal_base_url' in config");
  }

  // Set values necessary for Drupal bootstrap from external script.
  // See:
  // http://www.csdesignco.com/content/using-drupal-data-functions-
  // and-session-variables-external-php-script
  define('DRUPAL_ROOT', $app['drupal_root']);
  $base_url = $app['drupal_base_url'];

  // Bootstrap Drupal.
  require_once DRUPAL_ROOT . '/includes/bootstrap.inc';
  if (is_null($level)) {
    $level = DRUPAL_BOOTSTRAP_FULL;
  }
  drupal_bootstrap($level);

  if ($level == DRUPAL_BOOTSTRAP_FULL &&
  !empty($app['drupal_menu_active_item'])) {
    menu_set_active_item($app['drupal_menu_active_item']);
  }
}

/**
 * Checks that an authenticated and non-blocked Drupal user is tied to
 * the current session. If not, deny access for this request.
 *
 * @param $app
 *   Silex application object.
 */
function silex_limit_access_to_authenticated_users($app) {
  global $user;

  if (empty($user->uid)) {
    $app->abort(403, 'You must be logged in to access this page.');
  }
  if (empty($user->status)) {
    $app->abort(403, 'You must have an active account in order to ' .
      'access this page.');
  }
  if (empty($user->name)) {
    $app->abort(403, 'Your session must be tied to a username to ' .
    'access this page.');
  }
}

/**
 * Checks that the current user is a Drupal admin (with 'administrator'
 * role). If not, deny access for this request.
 *
 * @param $app
 *   Silex application object.
 */
function silex_limit_access_to_admin($app) {
  global $user;

  if (!in_array('administrator', $user->roles)) {
    $app->abort(403,
                'You must be an administrator to access this page.');
  }
}

$app->get('/', function() use($app) {
  silex_bootstrap_drupal($app);
  silex_limit_access_to_authenticated_users($app);
  silex_limit_access_to_admin($app);

  $ret = '<p>You should see this outputting within your ' .
         'Drupal site!</p>';

  return !empty($app['is_embedded_in_drupal']) ?
    drupal_render_page($ret) :
    $ret;
});

$app->run();

A number of things have been added to the code in this file, so let's examine them one-by-one. First of all, some Drupal-related settings have been added to the Silex $app object. The drupal_root and drupal_base_url settings, are the critical ones that are needed in order to bootstrap Drupal from within Silex. Because the Silex script is in a different filesystem path from the Drupal site, and because it's also being served from a different URL path, these need to be manually set and passed on to Drupal.

The is_embedded_in_drupal setting allows the rendering of the page via drupal_render_page() to be toggled on or off. The script could work fine without this, and with rendering via drupal_render_page() hard-coded to always occur; allowing it to be toggled is just a bit more elegant. The drupal_menu_active_item setting, when set, triggers the Drupal menu path to be set to the path specified (via menu_set_active_item()).

The route handler for our 'home page' path now calls three functions, before going on to render the page. The first one, silex_bootstrap_drupal(), is pretty self-explanatory. The second one, silex_limit_access_to_authenticated_users(), checks the Drupal global $user object to ensure that the current user is logged-in, and if not, it throws an exception. Similarly, silex_limit_access_to_admin() checks that the current user has the 'administrator' role (with failure resulting in an exception).

To test the authorisation checks that are now in place, log out of the Drupal site, and visit the Silex 'front page' at http://d7silextest.local/silexd7test/. You should see something like this:

Silex denying access to a page because Drupal user is logged out

The drupal_render_page() function is usually – in the case of a Drupal menu callback – passed a callback (a function name as a string), and rendering is then delegated to that callback. However, it also accepts an output string as its first argument; in this case, the passed-in string is outputted directly as the content of the 'main page content' Drupal block. Following that, all other block regions are assembled, and the full Drupal page is themed for output, business as usual.

To see the Silex 'front page' fully rendered, and without any 'access denied' message, log in to the Drupal site, and visit http://d7silextest.local/silexd7test/ again. You should now see something like this:

Silex serving output that's been passed through drupal_render_page().

And that's it – a Silex callback, with Drupal theming and Drupal access control!

Final remarks

The example I've walked through in this article, is a simplified version of what I implemented for my recent real-life project. Some important things that I modified, for the purposes of keeping this article quick 'n' dirty:

  • Changed the route handler and Drupal bootstrap / access-control functions, from being methods in a Silex Controller class (implementing Silex\ControllerProviderInterface) in a separate file, to being functions in the main index.php file (see the sketch after this list)
  • Changed the config values, from being stored in a JSON file and loaded via Igorw\Silex\ConfigServiceProvider, to being hard-coded into the $app object in raw PHP
  • Took out logging for the app via Silex\Provider\MonologServiceProvider
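
For the curious, here's a rough sketch of what that class-based structure looks like. The class name is hypothetical, and the Drupal bootstrap / access-control helpers are the same ones defined in index.php earlier:

<?php

use Silex\Application;
use Silex\ControllerProviderInterface;

/**
 * Hypothetical controller provider housing the 'home page' route,
 * as an alternative to defining the route directly in index.php.
 */
class MyAppControllerProvider implements ControllerProviderInterface {
  public function connect(Application $app) {
    $controllers = $app['controllers_factory'];

    $controllers->get('/', function() use($app) {
      silex_bootstrap_drupal($app);
      silex_limit_access_to_authenticated_users($app);
      silex_limit_access_to_admin($app);

      $ret = '<p>You should see this outputting within your ' .
             'Drupal site!</p>';

      return !empty($app['is_embedded_in_drupal']) ?
        drupal_render_page($ret) :
        $ret;
    });

    return $controllers;
  }
}

In index.php, the provider then just gets mounted with $app->mount('/', new MyAppControllerProvider()); and, similarly, the hard-coded settings can be swapped for a JSON file registered via new Igorw\Silex\ConfigServiceProvider(__DIR__ . '/../config/config.json') (the config file path being hypothetical).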

My real-life project is also significantly more than just a single "Hello World" route handler. It defines its own custom database, which it accesses via Doctrine's DBAL and ORM components. It uses Twig templates for all output. It makes heavy use of Symfony2's Form component. And it includes a number of custom command-line scripts, which are implemented using Symfony2's Console component. However, most of that is standard Silex / Symfony2 stuff which is not so noteworthy; and it's also not necessary for the purposes of this article.

I should also note that although this article is focused on Symfony2 / Silex, the example I've walked through here could be applied to any other PHP script that you might want to integrate with Drupal 7 in a similar way (as long as the PHP framework / script in question doesn't conflict with Drupal's function or variable names). However, it does make particularly good sense to integrate Symfony2 / Silex with Drupal 7 in this way, because: (a) Symfony2 components are going to be the foundation of Drupal 8 anyway; and (b) Symfony2 components are the latest and greatest components available for PHP right now, so the more projects you're able to use them in, the better.

]]>
Node.js itself is blocking, only its I/O is non-blocking https://greenash.net.au/thoughts/2012/11/nodejs-itself-is-blocking-only-its-io-is-non-blocking/ Thu, 15 Nov 2012 00:00:00 +0000 https://greenash.net.au/thoughts/2012/11/nodejs-itself-is-blocking-only-its-io-is-non-blocking/ I've recently been getting my feet wet, playing around with Node.js (yes, I know – what took me so long?). I'm having a lot of fun, learning new technologies by the handful. It's all very exciting.

I just thought I'd stop for a minute, however, to point out one important detail of Node.js that had me confused for a while, and that seems to have confused others, too. More likely than not, the first feature of Node.js that you heard about, was its non-blocking I/O model.

Now, please re-read that last phrase, and re-read it carefully. Non. Blocking. I/O. You will never hear anywhere, from anyone, that Node.js is non-blocking. You will only hear that it has non-blocking I/O. If, like me, you're new to Node.js, and you didn't stop to think about what exactly "I/O" means (in the context of Node.js) before diving in (and perhaps you weren't too clear on "non-blocking", either), then fear not.

What exactly – with reference to Node.js – is blocking, and what is non-blocking? And what exactly – also with reference to Node.js – is I/O, and what is not I/O? Let me clarify, for me as much as for you.

Blocking vs non-blocking

Let's start by defining blocking. A line of code is blocking, if all functionality invoked by that line of code must terminate before the next line of code executes.

This is the way that all traditional procedural code works. Here's a super-basic example of some blocking code in JavaScript:

console.log('Peking duck');
console.log('Coconut lychee');

In this example, the first line of code is blocking. Therefore, the first line must finish doing everything we told it to do, before our CPU gives the second line of code the time of day. Therefore, we are guaranteed to get this output:

Peking duck
Coconut lychee

Now, let me introduce you to Kev the Kook. Rather than just outputting the above lines to console, Kev wants to thoroughly cook his Peking duck, and exquisitely prepare his coconut lychee, before going ahead and brashly telling the guests that the various courses of their dinner are ready. Here's what we're talking about:

function prepare_peking_duck() {
  var duck = slaughter_duck();
  duck = remove_feathers(duck);
  var oven = preheat_oven(180, 'Celsius');
  duck = marinate_duck(duck, "Mr Wu's secret Peking herbs and spices");
  duck = bake_duck(duck, oven);
  serve_duck_with(duck, 'Spring rolls');
}

function prepare_coconut_lychee() {
  bowl = get_bowl_from_cupboard();
  bowl = put_lychees_in_bowl(bowl);
  bowl = put_coconut_milk_in_bowl(bowl);
  garnish_bowl_with(bowl, 'Peanut butter');
}

prepare_peking_duck();
console.log('Peking duck is ready');

prepare_coconut_lychee();
console.log('Coconut lychee is ready');

In this example, we're doing quite a bit of grunt work. Also, it's quite likely that the first task we call will take considerably longer to execute than the second task (mainly because we have to remove the feathers, which can be quite a tedious process). However, all that grunt work is still guaranteed to be performed in the order that we specified. So, the Peking duck will always be ready before the coconut lychee. This is excellent news, because eating the coconut lychee first would simply be revolting; everyone knows that it's a dessert dish.

Now, let's suppose that Kev previously had this code implemented in server-side JavaScript, but in a regular library that provided only blocking functions. He's just decided to port the code to Node.js, and to re-implement it using non-blocking functions.

Up until now, everything was working perfectly: the Peking duck was always ready before the coconut lychee, and nobody ever went home with a sour stomach (well, alright, maybe the peanut butter garnish didn't go down so well with everyone… but hey, just no pleasing some folks). Life was good for Kev. But now, things are more complicated.

In contrast to blocking, a line of code is non-blocking, if the next line of code may execute before the functionality invoked by that line of code has terminated.

Back to Kev's Chinese dinner. It turns out that in order to port the duck and lychee code to Node.js, pretty much all of his high-level functions will have to call some non-blocking Node.js library functions. And the way that non-blocking code essentially works is: if a function calls any other function that is non-blocking, then the calling function itself is also non-blocking. Sort of a viral, from-the-inside-out effect.

Kev hasn't really got his head around this whole non-blocking business. He decides, what the hell, let's just implement the code exactly as it was before, and see how it works. To his great dismay, though, the results of executing the original code with Node.js non-blocking functions is not great:

Peking duck is ready
Coconut lychee is ready

/path/to/prepare_peking_duck.js:9
    duck.toString();
         ^
TypeError: Cannot call method 'toString' of undefined
    at remove_feathers (/path/to/prepare_peking_duck.js:9:8)

This output worries Kev for two reasons. Firstly, and less importantly, it worries him because there's an error being thrown, and Kev doesn't like errors. Secondly, and much more importantly, it worries him because the error is being thrown after the program successfully outputs both "Peking duck is ready" and "Coconut lychee is ready". If the program isn't able to get past the end of remove_feathers() without throwing a fatal error, then how could it possibly have finished the rest of the duck and lychee preparation?

The answer, of course, is that all of Kev's dinner preparation functions are now effectively non-blocking. This means that the following happened when Kev ran his script:

Called prepare_peking_duck()
  Called slaughter_duck()
    Non-blocking code in slaughter_duck() doesn't execute until
    after current blocking code is done. Is supposed to return an int,
    but actually returns nothing
  Called remove_feathers() with return value of slaughter_duck()
  as parameter
    Non-blocking code in remove_feathers() doesn't execute until
    after current blocking code is done. Is supposed to return an int,
    but actually returns nothing
  Called other duck-preparation functions
    They all also contain non-blocking code, which doesn't execute
    until after current blocking code is done
Printed 'Peking duck is ready'
Called prepare_coconut_lychee()
  Called lychee-preparation functions
    They all also contain non-blocking code, which doesn't execute
    until after current blocking code is done
Printed 'Coconut lychee is ready'
Returned to prepare_peking_duck() context
  Returned to slaughter_duck() context
    Executed non-blocking code in slaughter_duck()
  Returned to remove_feathers() context
    Error executing non-blocking code in remove_feathers()

Before too long, Kev works out – by way of logical reasoning – that the execution flow described above is indeed what is happening. So, he comes to the realisation that he needs to re-structure his code to work the Node.js way: that is, using a whole lotta callbacks.

After spending a while fiddling with the code, this is what Kev ends up with:

function prepare_peking_duck(done) {
  slaughter_duck(function(err, duck) {
    remove_feathers(duck, function(err, duck) {
      preheat_oven(180, 'Celsius', function(err, oven) {
        marinate_duck(duck,
                      "Mr Wu's secret Peking herbs and spices",
                      function(err, duck) {
          bake_duck(duck, oven, function(err, duck) {
            serve_duck_with(duck, 'Spring rolls', done);
          });
        });
      });
    });
  });
}

function prepare_coconut_lychee(done) {
  get_bowl_from_cupboard(function(err, bowl) {
    put_lychees_in_bowl(bowl, function(err, bowl) {
      put_coconut_milk_in_bowl(bowl, function(err, bowl) {
        garnish_bowl_with(bowl, 'Peanut butter', done);
      });
    });
  });
}

prepare_peking_duck(function(err) {
  console.log('Peking duck is ready');
});

prepare_coconut_lychee(function(err) {
  console.log('Coconut lychee is ready');
});

This runs without errors. However, it produces its output in the wrong order – this is what it spits onto the console:

Coconut lychee is ready
Peking duck is ready

This output is possible because, with the code in its current state, the execution of both of Kev's preparation routines – the Peking duck preparation, and the coconut lychee preparation – are sent off to run as non-blocking routines; and whichever one finishes executing first gets its callback fired before the other. And, as mentioned, the Peking duck can take a while to prepare (although utilising a cloud-based grid service for the feather plucking can boost performance).

Now, as we already know, eating the coconut lychee before the Peking duck causes you to fart a Szechuan Stinker, which is classified under international law as a chemical weapon. And Kev would rather not be guilty of war crimes, simply on account of a small culinary technical hiccup.

This final execution-ordering issue can be fixed easily enough, by converting one remaining spot to use a nested callback pattern:

prepare_peking_duck(function(err) {
  console.log('Peking duck is ready');
  prepare_coconut_lychee(function(err) {
    console.log('Coconut lychee is ready');
  });
});

Finally, Kev can have his lychee and eat it, too.

I/O vs non-I/O

I/O stands for Input/Output. I know this because I spent four years studying Computer Science at university.

Actually, that's a lie. I already knew what I/O stood for when I was about ten years old.

But you know what I did learn at university? I learnt more about I/O than what the letters stood for. I learnt that the technical definition of a computer program, is: an executable that accepts some discrete input, that performs some processing, and that finishes off with some discrete output.

Actually, that's a lie too. I already knew that from high school computer classes.

You know what else is a lie? (OK, not exactly a lie, but at the very least it's confusing and incomplete). The description that Node.js folks give you for "what I/O means". Have a look at any old source (yes, pretty much anywhere will do). Wherever you look, the answer will roughly be: I/O is working with files, doing database queries, and making web requests from your app.

As I said, that's not exactly a lie. However, that's not what I/O is. That's a set of examples of what I/O is. If you want to know what the definition of I/O actually is, let me tell you: it's any interaction that your program makes with anything external to itself. That's it.

I/O usually involves your program reading a piece of data from an external source, and making it available as a variable within your code; or conversely, taking a piece of data that's stored as a variable within your code, and writing it to an external source. However, it doesn't always involve reading or writing data; and (as I'm trying to emphasise), it doesn't need to involve that, in order to fall within the definition of I/O for your program.

At a basic technical level, I/O is nothing more than any instance of your program invoking another program on the same machine. The simplest example of this, is executing another program via a command-line statement from your program. Node.js provides the non-blocking I/O function child_process.exec() for this purpose; running shell commands with it is pretty easy.
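
By way of a quick illustration (the pwd command and the messages here are just arbitrary examples):

var exec = require('child_process').exec;

// Invoking an external program is I/O, so exec() is non-blocking:
// the callback fires whenever pwd finishes, and the final
// console.log() below runs without waiting for it.
exec('pwd', function(err, stdout, stderr) {
  if (err) {
    console.log('Uh oh, pwd failed: ' + err);
    return;
  }
  console.log('Current directory is: ' + stdout);
});

console.log('This line is printed before the pwd output appears.');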

The most common and the most obvious example of I/O, reading and writing files, involves (under the hood) your program invoking the various utility programs provided by all OSes for interacting with files. open is another program somewhere on your system. read, write, close, stat, rename, unlink – all individual utility programs living on your box.

From this perspective, a DBMS is just one more utility program living on your system. (At least, the client utility lives on your system – where the server lives, and how to access it, is the client utility's problem, not yours). When you open a connection to a DB, perform some queries (regardless of them being read or write queries), and then close the connection, the only really significant point (for our purposes) is that you're making various invocations to a program that's external to your program.

Similarly, all network communication performed by your program is nothing more than a bunch of invocations to external utility programs. Although these utility programs provide the illusion (both to the programmer and to the end-user) that your program is interacting directly with remote sources, in reality the direct interaction is only with the utilities on your machine for opening a socket, port mapping, TCP / UDP packet management, IP addressing, DNS lookup, and all the other gory details.

And, of course, working with HTTP is simply dealing with one extra layer of utility programs, on top of all the general networking utility programs. So, when you consider it from this point of view, making a JSON API request to an online payment broker over SSL, is really no different to executing the pwd shell command. It's all I/O!

I hope I've made it crystal-clear by now, what constitutes I/O. So, conversely, you should also now have a clearer idea of exactly what constitutes non-I/O. In a nutshell: any code that does not invoke any external programs, any code that is completely insular and that performs all processing internally, is non-I/O code.

The philosophy behind Node.js, is that most database-driven web apps – what with their being database-driven, and web-based, and all – don't actually have a whole lot of non-I/O code. In most such apps, the non-I/O code consists of little more than bits 'n' pieces that happen in between the I/O bits: some calculations after retrieving data from the database; some rendering work after performing the business logic; some parsing and validation upon receiving incoming API calls or form submissions. It's rare for web apps to perform any particularly intensive tasks, without the help of other external utilities.

Some programs do contain a lot of non-I/O code. Typically, these are programs that perform more heavy processing based on the direct input that they receive. For example, a program that performs an expensive mathematical computation, such as finding all Fibonacci numbers up to a given value, may take a long time to execute, even though it only contains non-I/O code (by the way, please don't write a Fibonacci number app in Node.js). Similarly, image processing utility programs are generally non-I/O, as they perform a specialised task using exactly the image data provided, without outside help.

Putting it all together

We should now all be on the same page, regarding blocking vs non-blocking code, and regarding I/O vs non-I/O code. Now, back to the point of this article, which is to better explain the key feature of Node.js: its non-blocking I/O model.

As others have explained, in Node.js everything runs in parallel, except your code. What this means is that all I/O code that you write in Node.js is non-blocking, while (conversely) all non-I/O code that you write in Node.js is blocking.

So, as Node.js experts are quick to point out: if you write a Node.js web app with non-I/O code that blocks execution for a long time, your app will be completely unresponsive until that code finishes running. As I said: please, no Fibonacci in Node.js.

When I started writing in Node.js, I was under the impression that the V8 engine it uses automagically makes your code non-blocking, each time you make a function call. So I thought that, for example, changing a long-running while loop to a recursive loop would make my (completely non-I/O) code non-blocking. Wrong! (As it turns out, if you'd like a language that automagically makes your code non-blocking, apparently Erlang can do it for you – however, I've never used Erlang, so can't comment on this).

In fact, the secret to non-blocking code in Node.js is not magic. It's a bag of rather dirty tricks, the most prominent (and the dirtiest) of which is the process.nextTick() function.
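
To give you a taste of it, the usual trick is to break a long-running computation into small chunks, and to schedule each successive chunk with process.nextTick(), so that other queued events get a look-in between chunks. A purely illustrative sketch (the function names are made up):

// Counts from 0 to n in chunks of 1000 iterations, yielding back to
// the event loop between chunks via process.nextTick().
function count_in_chunks(n, done) {
  var i = 0;

  function do_chunk() {
    var stop = Math.min(i + 1000, n);
    for (; i < stop; i++) {
      // Some (non-I/O, processor-intensive) work per iteration
      // would go here.
    }

    if (i < n) {
      process.nextTick(do_chunk);
    }
    else {
      done();
    }
  }

  do_chunk();
}

count_in_chunks(1000000, function() {
  console.log('Done counting');
});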

As others have explained, if you need to write truly non-blocking processor-intensive code, then the correct way to do it is to implement it as a separate program, and to then invoke that external program from your Node.js code. Remember:

Not in your Node.js code == I/O == non-blocking

I hope this article has cleared up more confusion than it's created. I don't think I've explained anything totally new here, but I believe I've explained a number of concepts from a perspective that others haven't considered very thoroughly, and with some new and refreshing examples. As I said, I'm still brand new to Node.js myself. Anyway, happy coding, and feel free to add your two cents below.

]]>
Batch updating Drupal 7 field data https://greenash.net.au/thoughts/2012/11/batch-updating-drupal-7-field-data/ Thu, 08 Nov 2012 00:00:00 +0000 https://greenash.net.au/thoughts/2012/11/batch-updating-drupal-7-field-data/ On a number of my recently-built Drupal sites, I've become a fan of using the Computed Field module to provide a "search data" field, as a Views exposed filter. This technique has been documented by other folks here and there (I didn't invent it), so I won't cover its details here. Basically, it's a handy way to create a search form that searches exactly the fields you're interested in, thus providing you with more fine-grained control than the core Drupal search module, and with much less installation / configuration overhead than Apache Solr.

On one such site, which has over 4,000 nodes that are searchable via this technique, I needed to add another field to the index, and re-generate the Computed Field data for every node. This data normally only gets re-generated when each individual node is saved. In my case, that would not be sufficient - I needed the entire search index refreshed immediately.

The obvious solution, would be to whip up a quick script that loops through all the nodes in question, and that calls node_save() on each pass through the loop. However, this solution has two problems. Firstly, node_save() is really slow (particularly when the node has a lot of other fields, such as was my case). So slow, in fact, that in my case I was fighting a losing battle against PHP "maximum execution time exceeded" errors. Secondly, node_save() is slow unnecessarily, as it re-saves all the data for all a node's fields (plus it invokes a bazingaful of hooks), whereas we only actually need to re-save the data for one field (and we don't need any hooks invoked, thanks).

In the interests of both speed and cutting-out-the-cruft, therefore, I present here an alternative solution: getting rid of the middle man (node_save()), and instead invoking the field_storage_write callback directly. Added bonus: I've implemented it using the Batch API functionality available via Drupal 7's hook_update_N().

Show me the code

The below code uses a (pre-defined) Computed field called field_search_data, and processes nodes of type event, news or page. It also sets the limit per batch run to 50 nodes. Naturally, all of this should be modified per your site's setup, when borrowing the code.

<?php
/**
 * Batch update computed field values for 'field_search_data'.
 */
function mymodule_update_7000(&$sandbox) {
  $entity_type = 'node';
  $field_name = 'field_search_data';
  $langcode = 'und';
  $storage_module = 'field_sql_storage';

  $field_id = db_query('SELECT id FROM {field_config} WHERE ' .
  'field_name = :field_name', array(
    ':field_name' => $field_name
    ))->fetchField();

  $field = field_info_field($field_name);
  $types = array(
    'event',
    'news',
    'page',
  );

  // Go through all published nodes in all of the above node types,
  // and generate a new 'search_data' computed value. The field
  // instance is loaded per node, inside the loop below, since each
  // node type (bundle) has its own instance of the field.

  if (!isset($sandbox['progress'])) {
    $sandbox['progress'] = 0;
    $sandbox['last_nid_processed'] = -1;
    $sandbox['max'] = db_query('SELECT COUNT(*) FROM {node} WHERE ' .
      'type IN (:types) AND status = 1 ORDER BY nid', array(
        ':types' => $types
      ))->fetchField();

    // I chose to delete existing data for this field, so I can
    // clearly monitor in phpMyAdmin the field data being re-generated.
    // Not necessary to do this.
    // NOTE: do not do this if you have actual important data in
    // this field! In my case it's just a search index, so it's OK.
    // May not be so cool in your case.
    db_query('TRUNCATE TABLE {field_data_' . $field_name . '}');
    db_query('TRUNCATE TABLE {field_revision_' . $field_name . '}');
  }

  $limit = 50;
  $result = db_query_range('SELECT nid FROM {node} WHERE ' .
    'type IN (:types) AND status = 1 AND nid > :lastnid ORDER BY nid',
    0, $limit, array(
      ':types' => $types,
      ':lastnid' => $sandbox['last_nid_processed']
    ));

  while ($nid = $result->fetchField()) {
    $entity = node_load($nid);

    if (!empty($entity->nid)) {
      // Load the field instance for this node's bundle (node type).
      $instance = field_info_instance($entity_type, $field_name,
                                      $entity->type);

      $items = isset($entity->{$field_name}[$langcode]) ?
                 $entity->{$field_name}[$langcode] :
                 array();

      _computed_field_compute_value($entity_type, $entity, $field,
                                    $instance, $langcode, $items);

      if ($items !== array() ||
          isset($entity->{$field_name}[$langcode])) {
        $entity->{$field_name}[$langcode] = $items;

        // This only writes the data for the single field we're
        // interested in to the database. Much less expensive than
        // the easier alternative, which would be to node_save()
        // every node.
        module_invoke($storage_module, 'field_storage_write',
                      $entity_type, $entity, FIELD_STORAGE_UPDATE,
                      array($field_id));
      }
    }

    $sandbox['progress']++;
    $sandbox['last_nid_processed'] = $nid;
  }

  if (empty($sandbox['max'])) {
    $sandbox['#finished'] = 1.0;
  }
  else {
    $sandbox['#finished'] = $sandbox['progress'] / $sandbox['max'];
  }

  if ($sandbox['#finished'] == 1.0) {
    return t('Updated \'search data\' computed field values.');
  }
}
 

The feature of note in this code, is that we're updating Field API data without calling node_save(). We're doing this by manually generating the new Computed Field data, via _computed_field_compute_value(); and by then invoking the field_storage_write callback with the help of module_invoke().

Unfortunately, doing it this way is a bit complicated - these functions expect a whole lot of Field API and Entity API parameters to be passed to them, and preparing all these parameters is no walk in the park. Calling node_save() takes care of all this legwork behind the scenes.

This approach still isn't lightning-fast, but it performs significantly better than its alternative. Plus, by avoiding the usual node hook invocations, we also avoid any unwanted side-effects of simulating a node save operation (e.g. creating a new revision, affecting workflow state).

To execute the procedure as it's implemented here, all you need to do is visit update.php in your browser (or run drush updb from your terminal), and it will run as a standard Drupal database update. In my case, I chose to implement it in hook_update_N(), because: it gives me access to the Batch API for free; it's guaranteed to run only once; and it's protected by superuser-only access control. But, for example, you could also implement it as a custom admin page, calling the Batch API from a menu callback within your module.

Just one example

The use case presented here – a Computed Field used as a search index for Views exposed filters – is really just one example of how this technique could come in handy. What I'm trying to provide in this article, is a code template that can be applied to any scenario in which a single field (or a small number of fields) needs to be modified across a large volume of existing nodes (or other entities).

I can think of quite a few other potential scenarios. A custom "phone" field, where a region code needs to be appended to all existing data. A "link" field, where any existing data missing a "www" prefix needs to have it added. A node reference field, where certain saved Node IDs need to be re-mapped to new values, because the old pages have been archived. Whatever your specific requirement, I hope this code snippet makes your life a bit easier, and your server load a bit lighter.

]]>
How compatible are the world's major religions? https://greenash.net.au/thoughts/2012/10/how-compatible-are-the-worlds-major-religions/ Wed, 17 Oct 2012 00:00:00 +0000 https://greenash.net.au/thoughts/2012/10/how-compatible-are-the-worlds-major-religions/ There are a tonne of resources around that compare the world's major religions, highlighting the differences between each. There are some good comparisons of Eastern vs Western religions, and also numerous comparisons of Christianity vs non-Christianity.

However, I wasn't able to find any articles that specifically investigate the compatibility between the world's major religions. The areas where different religions are "on the same page", and are able to understand each other and (in the better cases) to respect each other; vs the areas where they're on a different wavelength, and where a poor capacity for dialogue is a potential cause for conflict.

I have, therefore, taken the liberty of penning such an analysis myself. What follows is a very humble list of aspects in which the world's major religions are compatible, vs aspects in which they are incompatible.

Compatible:

  • Divinity (usually although not universally manifested by the concept of a G-d or G-ds; this is generally a religion's core belief)
  • Sanctity (various events, objects, places, and people are considered sacred by the religion)
  • Community (the religion is practiced by more than one person; the religion's members assemble in order to perform significant tasks together; the religion has the fundamental properties of a community – i.e. a start date, a founder or founders, a name / label, a size as measured by membership, etc)
  • Personal communication with the divine and/or personal expression of spirituality (almost universally manifested in the acts of prayer and/or meditation)
  • Stories (mythology, stories of the religion's origins / founding, parables, etc)
  • Membership and initiation (i.e. a definition of "who is a member" of the religion, and defined methods of obtaining membership – e.g. by birth, by initiation ritual, by force)
  • Death rites (handling of dead bodies – e.g. burial, cremation; mourning rituals; belief in / position regarding one's fate following death)
  • Material expression, often (although not always) involving symbolism (e.g. characteristic clothing, music, architecture, and artwork)
  • Ethical guidance (in the form of books, oral wisdom, fundamental precepts, laws, codes of conduct, etc – although it should also be noted that religion and ethics are two different concepts)
  • Social guidance (marriage and family; celebration of festivities and special occasions; political views; behaviour towards various societal groups e.g. children, elders, community leaders, disadvantaged persons, members of other religions)
  • Right and wrong, in terms of actions and/or thoughts (i.e. definition of "good deeds", and of "sins"; although the many connotations of sin – e.g. punishment, divine judgment, consequences in the afterlife – are not universal)
  • Common purpose (although it's impossible to definitively state what religion's purpose is – e.g. religion provides hope; "religion's purpose is to provide a sense of purpose"; religion provides access to the spiritual and the divine; religion exists to facilitate love and compassion – also plenty of sceptical opinions, e.g. religion is the "opium of the masses"; religion is superstition and dogma for fools)
  • Explanation of the unknown (religion provides answers where reason and science cannot – e.g. creation, afterlife)

Incompatible:

  • The nature of divinity (one G-d vs many G-ds; G-d-like personification of divinity vs more abstract concept of a divine force / divine plane of existence; infinite vs constrained extent of divine power)
  • Acknowledgement of other religions (not all religions even acknowledge the existence of others; of those that do, many refuse to acknowledge their validity; and of those that acknowledge validity, most consider other religions as "inferior")
  • Tolerance of other religions (while some religions encourage harmony with the rest of the world, other religions promote various degrees of intolerance – e.g. holy war, forced conversion, socio-economic discrimination)
  • Community structure (religious communities range from strict bureaucratic hierarchies, to unstructured liberal movements, with every possible shade of grey in between)
  • What has a "soul" (all objects in the universe, from rocks upward, have a soul; vs only living organisms have a soul; vs only humans have a soul; vs there is no such thing as a soul)
  • Afterlife (re-incarnation vs eternal afterlife vs soul dies with body; consequences, if any, of behaviour in life on what happens after death)
  • Acceptable social norms (monogamous vs polygamous marriage; fidelity vs open relationships; punishment vs leniency towards children; types of prohibited relationships)
  • Form of rules (strict laws with strict punishments; vs only general guidelines / principles)
  • Ethical stances (on a broad range of issues, e.g. abortion, drug use, homosexuality, tattoos / piercings, blood transfusions, organ donation)
  • Leader figure(s) (Christ vs Moses vs Mohammed vs Buddha vs saints vs pagan deities vs Confucius)
  • Holy texts (Qur'an vs Bible vs Torah vs Bhagavad Gita vs Tripitaka)
  • Ritual manifestations (differences in festivals; feasting vs fasting vs dietary laws; song, dance, clothing, architecture)

Why can't we be friends?

This quick article is my take on the age-old question: if all religions are supposedly based on universal peace and love, then why have they caused more war and bloodshed than any other force in history?

My logic behind comparing religions specifically in terms of "compatibility", rather than simply in terms of "similarities and differences", is that a compatibility analysis should yield conclusions that are directly relevant to the question that we're all asking (i.e. Why can't we be friends?). Logically, if religions were all 100% compatible with each other, then they'd never have caused any conflict in all of human history. So where, then, are all those pesky incompatibilities, that have caused peace-avowing religions to time and again be at each others' throats?

The answer, I believe, is the same one that explains why Java and FORTRAN don't get along well (excuse the geek reference). They both let you write computer programs – but on very different hardware, and in very different coding styles. Or why Chopin fans and Rage Against the Machine fans aren't best friends. They both like to listen to music, but at very different decibels, and with very different amounts of tattoos and piercings applied. Or why a Gemini and a Cancer weren't meant for each other (if you happen to believe in astrology, which I don't). They're both looking for companionship in this big and lonely world, but they laugh and cry in different ways, and the fact is they'll just never agree on whether sushi should be eaten with a fork or with chopsticks.

Religions are just one more parallel. They all aim to bring purpose and hope to one's life; but they don't always quite get there, because along the way they somehow manage to get bogged down discussing on which day of the week only raspberry yoghurt should be eaten, or whether the gates of heaven are opened by a lifetime of charitable deeds or by just ringing the buzzer.

Religion is just one more example of a field where the various competing groups all essentially agree on, and work towards, the same basic purpose; but where numerous incompatibilities arise due to differences in their implementation details.

Perhaps religions could do with a few IEEE standards? Although, then again, if the world can't even agree on a globally compatible standard for something as simple as what type of electrical plug to use, I doubt there's much hope for religion.

]]>
How close is the European Union to being a federation? https://greenash.net.au/thoughts/2012/08/how-close-is-the-european-union-to-being-a-federation/ Tue, 28 Aug 2012 00:00:00 +0000 https://greenash.net.au/thoughts/2012/08/how-close-is-the-european-union-to-being-a-federation/ There has been considerable debate over the past decade or so, regarding the EU and federalism. Whether the EU is already a federation. Whether the EU is getting closer to being a federation. Whether the EU has the intention of becoming a federation. Just what the heck it means, anyway, to be a federation.

This article is my quick take, regarding the current status of the federal-ness of the EU. Just a simple layman's opinion, on what is of course quite a complex question. Perhaps not an expert analysis; but, hopefully, a simpler and more concise run-down than experts elsewhere have provided.

EU as a federation – yes or no?

The fruit salad of modern Europe.
The fruit salad of modern Europe.

(Image courtesy of Probert Encyclopaedia).

Yes:

  • Free trade within the Union, no customs tariffs within the Union
  • Unified representation in the WTO
  • Unified environmental protection law (and enforcement of such law)

No:

  • Single currency (currently only uniform within the Eurozone, of which several EU member states are not a part)
  • Removal of border controls (some borders still exist, most notably between UK / Ireland and the rest of Europe)
  • Central bank (the ECB only applies to the Eurozone, not the whole EU; the ECB is less powerful than other central banks around the world, as member states still maintain their own individual central banks)
  • Integrated legislative system (European Union law still based largely on treaties rather than statutes / precedents; most EU law applies only indirectly on member states; almost no European Union criminal law)
  • Federal constitution (came close, but was ultimately rejected; current closest entity is the Treaty of Lisbon)
  • Federal law enforcement (Europol has no executive powers, and is not a federal police force; the police forces and other law-enforcement bodies of each member state still maintain full responsibility and full jurisdiction)
  • Integrated judicial system (European Court of Justice only has jurisdiction over EU law, has neither direct nor appellate jurisdiction over the laws of member states; as such, the ECJ is not a federal supreme / high court)
  • Single nationality (each member state still issues its own passports, albeit with "EU branding"; member state citizenships / nationalities haven't been replaced with single "European" citizenship / nationality)
  • Unified immigration law (each member state still has jurisdiction over most forms of immigration, including the various permanent residency categories and also citizenship; EU has only unified immigration in specific cases, such as visa-free periods for eligible tourists, and the Blue Card for skilled work visas)
  • Military (each member state maintains its own military, although the EU does co-ordinate defence policy between members; EU itself only has a peacekeeping force)
  • Taxation (EU does not tax citizens directly, only charges a levy on the government of each member state)
  • Unified health care system (still entirely separate systems for each member state)
  • Unified education system (still entirely separate systems for each member state)
  • Unified foreign relations (member state embassies worldwide haven't been replaced with "European" embassies; still numerous treaties and bilateral relations exist directly between member states and other nations worldwide)
  • Unified representation in the UN (each member-state still has its own UN seat)
  • Unified national symbols (the EU has a flag, an anthem, various headquarter buildings, a president, a capital city, official languages, and other symbols; but member states retain their own symbols in all of the above; and the symbols of individual member states are generally still of greater importance than the EU symbols)
  • Sovereignty under international law (each member-state is still a sovereign power)

Verdict

The EU is still far from being a federated entity, in its present incarnation. It's also highly uncertain whether the EU will become more federated in the future; and it's generally accepted that at the moment, many Europeans have no desire for the EU to federate further.

Europe has achieved a great deal in its efforts towards political and economic unity. Unfortunately, however, a number of the European nations have been dragged kicking and screaming every step of the way. On account of this, there have been far too many compromises made, mainly in the form of agreeing to exceptions and exclusions. There are numerous binding treaties, but there is no constitution. There is a quasi-supreme court, but it has no supreme jurisdiction. There is a single currency and a border-free zone, apart from where there isn't. In fact, there is even a European Union, apart from where there isn't (with Switzerland and Norway being the most conspicuously absent parties).

Federalism just doesn't work like that. In all the truly federated unions in the world, all of the above issues have been resolved unequivocally – no exceptions, no special agreements. Whichever way you look at it – by comparison with international standards; by reference to formal definitions; or simply by logical reasoning and with a bit of common sense – the European Union is an oxymoron at best, and the United States of Europe remains an improbable dream.

]]>
Syria, past and present: mover and shaken https://greenash.net.au/thoughts/2012/08/syria-past-and-present-mover-and-shaken/ Sun, 19 Aug 2012 00:00:00 +0000 https://greenash.net.au/thoughts/2012/08/syria-past-and-present-mover-and-shaken/ For the past few weeks, the world's gaze has focused on Syria, a nation currently in the grip of civil war. There has been much talk about the heavy foreign involvement in the conflict — both of who's been fuelling the fire of rebel aggression, and of who's been defending the regime against global sanctions and condemnation. While it has genuine grassroots origins – and while giving any one label to it (i.e. to an extremely complex situation) would be a gross over-simplification — many have described the conflict as a proxy war involving numerous external powers.

Foreign intervention is nothing new in Syria, which is the heart of one of the most ancient civilised regions in the world. Whether it be Syria's intervention in the affairs of others, or the intervention of others in the affairs of Syria – both of the above have been going on unabated for thousands of years. With an alternating role throughout history as either a World Power in its own right, or as a point of significance to other World Powers (with the latter being more often the case), Syria could be described as a serious "mover and shaken" kind of place.

This article examines, over the ages, the role of the land that is modern-day Syria (which, for convenience's sake and at the expense of anachronism, I will continue to refer to as "Syria"), in light of this theme. It is my endeavour that by exploring the history of Syria in this way, I am able to highlight the deep roots of "being influential" versus "being influenced by" – a game that Syria has been playing expertly for millennia – and that ultimately, I manage to inform readers from a new angle, regarding the current tragic events that are occurring there.

Ancient times

The borders of Syria in the ancient world were not clearly defined; however, for as far back as its recorded history extends, the region has been centred upon the cities of Damascus and Aleppo. These remain the two largest and most important cities in Syria to this day. They are also both contenders for the claim of oldest continuously-inhabited city in the world.

One or the other of these two cities has almost always been the seat of power; and on various occasions, Syria has been reduced (by the encroachment of surrounding kingdoms / empires) to little more than the area immediately surrounding one or both of these cities. From the capital, the dominion of Syria has generally extended west to the coastal plains region (centred on the port of Latakia), east to the Euphrates river and beyond (the "Al-Jazira" region), and south to the Hawran Plateau.

Syria's recorded history begins in the Late Bronze Age / Early Iron Age, c. 1200 BC. In this era, the region was populated and ruled by various ancient kingdoms, including the Phoenicians (based in modern-day Lebanon, to the west of Syria), the Hittites (based in modern-day Turkey, to the north of Syria), the Assyrians (based in modern-day Northern Iraq, to the east of Syria), and the ancient Egyptians. Additionally, Syria was often in contact (both friendly and hostile) with the ancient kingdom of Israel, and with the other Biblical realms — including Ammon, Moab, Edom (all in modern-day Jordan), and Philistia (in modern day Israel and Gaza) — which all lay to the south. Most importantly, however, it was around this time that the Arameans emerged.

The Arameans can be thought of as the original, defining native tribe of Syria. The Arameans began as a small kingdom in southern Syria, where they captured Damascus and made it their capital; this also marked the birth of Damascus as a city of significance. The Arameans' early conquests included areas in southern modern-day Lebanon (such as the Bekaa Valley), and in northern modern-day Israel (such as Rehov). Between 1100 BC and 900 BC, the Aramean kingdoms expanded northward to the Aleppo area, and eastward to the Euphrates area. All of this area (i.e. basically all of modern-day Syria) was in ancient times known as Aram, i.e. land of the Arameans.

Aram in the ancient Levant (c. 1000 BC*). Satellite image courtesy of Google Earth.
Aram in the ancient Levant (c. 1000 BC*). Satellite image courtesy of Google Earth.

*Note: Antioch was not founded until c. 320 BC; it is included on this map as a point of reference, due to its significance in later ancient Syrian history.

The story of Aramaic

It is with the Arameans that we can observe the first significant example, in Syria's long history, of a land that has its own distinct style of "influencing" and of "being influenced by". The Arameans are generally regarded by historians as a weak civilisation that was repeatedly conquered and dominated by neighbouring empires. They never united into a single kingdom; rather, they were a loose confederation of city-states and tribes. The Aramean civilisation in its original form – i.e. as independent states, able to assert their self-determination – came to an end c. 900 BC, when the entire region was subjugated by the Neo Assyrian Empire. Fairly clear example of "being influenced by".

Ironically, however, this subjugation was precisely the event that led to the Arameans subsequently leaving a profound and long-lasting legacy upon the entire region. During the rule of the Neo Assyrians in Syria, a significant portion of the Aramean population migrated – both voluntarily and under duress – to the Assyrian heartland and to Babylonia. Once there, the Aramaic language began to spread: first within the Empire's heartland; and ultimately throughout the greater Empire, which at its height included most of the Fertile Crescent of the ancient Middle East.

The Aramaic language was the lingua franca of the Middle East between approximately 700 BC and 700 AD. Aramaic came to displace its "cousin" languages, Hebrew and Phoenician, in the areas of modern-day Israel and Lebanon. Hebrew is considered the "language of the Jews" and the "language of the Bible"; however, for the entire latter half or so of Biblical Israel's history, the language of conversation was Aramaic, with Hebrew relegated to little more than ritual and scriptural use. It is for this reason that Jesus spoke Aramaic; and it is also for this reason that most of the later Jewish scriptural texts (including several books of the Tanakh, and almost the entire Talmud) were written in Aramaic.

Aramaic included various dialects: of these, the most influential was Syriac, which itself evolved into various regional sub-dialects. Syriac was originally the dialect used by the Imperial Assyrians in their homeland – but in later years, it spread west to northern Syria and to Turkey; and east to Persia, and even as far as India. Syriac played a significant role in the early history of Christianity, and a small number of Christian groups continue to read Syriac Christian texts to this day. Another important dialect of ancient Aramaic was Mandaic, which was the dominant dialect spoken by those Aramaic speakers who settled in ancient Persia.

Although not a direct descendant of Aramaic, Arabic is another member of the Semitic language family; and spoken Arabic was heavily influenced by Aramaic, in the centuries preceding the birth of Islam. The Arabic writing system is a direct descendant of the Nabatean (ancient Jordanian) Aramaic writing system. With the rise of Islam, from c. 630 AD onwards, Arabic began to spread throughout the Middle East, first as the language of religion, then later as the language of bureaucracy, and ultimately as the new lingua franca. As such, it was Arabic that finally ended the long and influential dominance of Aramaic in the region. To this day, the majority of the formerly Aramaic-speaking world – including Syria itself – now uses Arabic almost universally.

Aramaic remains a living language in the modern world, although it is highly endangered. To this day, Aramaic's roots in ancient Aram are attested to, by the fact that the only remaining native speakers of (non-Syriac / non-Mandaic) Aramaic, are the residents of a handful of remote villages, in the mountains of Syria near Damascus. It seems that Aramaic's heyday, as the de facto language of much of the civilised world, has long passed. Linguistically speaking, Syria has long since been "under the influence"; nevertheless, Syria's linguistic heyday still lives on in an isolated corner of the nation's patchwork.

Syria and the Empires

After the conquest of Syria by the Assyrians in c. 900 BC, Syria continued to be ruled by neighbouring or distant empires for the next 1,500 years. Towards the end of the 7th century BC, the Assyrians were overshadowed by the Babylonians, and by c. 600 BC the Babylonians had conquered Syria. Shortly after, the Babylonians were overwhelmed by the growing might of the Persian Empire, and by c. 500 BC Syria was under Persian dominion. Little is known about Syria during these years, apart from accounts of numerous rebellions (particularly under Assyrian rule). However, it seems unlikely that the changes of governance in this era had any noticeable cultural or political effect on Syria.

All that changed c. 330 BC, when Alexander the Great conquered Syria – along with conquering virtually the entire Persian Empire in all its vastness – and Syria, for the first time, fell under the influence of an Empire to its west, rather than to its east (it also came to be known as "Syria" only from this time onward, as the name is of Greek origin). The Greeks built a new capital, Antioch, which dealt a severe blow to Damascus, and which shifted Syria's seat of power to the north for the first time (the Greeks also established Aleppo, which they called Beroea; from its onset, the city was of some importance). The Greeks also imposed their language and religion upon Syria, as they did upon all their Empire; however, these failed to completely replace the Aramaic language and the old religious worship, which continued to flourish outside of the Greek centres.

Syria remained firmly under occidental dominion for quite some time thereafter. The Armenian kingdom conquered Greek Syria in 83 BC, although the Armenians held on to it for only a brief period. Syria was conquered by the Romans, and was made a Roman province in 64 BC; this marked the start of more than 300 years of Syrian administration directly by Imperial Rome.

Syria remained subordinate during this time; however, Antioch was one of the largest and most powerful cities of the Empire (surpassed only by Rome and Byzantium), and as such, it enjoyed a certain level of autonomy. As in the Greek era, Syria continued to be influenced by both the Imperial language (now Latin – although Greek remained more widely-used than Latin in Syria and its neighbours), and by the Imperial religion ("Greco-Roman"); and as in Greek times, this influence continued to grow, but it never completely engulfed Syria.

Syria was also heavily influenced by, and heavily influential in, the birth and early growth of Christianity. From c. 252 AD, Antioch became the home of the world's first organised Christian Church, which later became the Antiochian Orthodox Church (this Church has since moved its headquarters to Damascus). It is said that Paul the Apostle was converted while travelling on the Road to Damascus – thus giving Damascus, too, a significant role in the stories of the New Testament.

From 260 to 273 AD, Syria was controlled by the rebel group of the Roman Empire that governed from Palmyra, a city in central Syria. This rebel government was crushed by the Romans, and Syria and its neighbouring provinces subsequently returned to Roman rule. For the next hundred or so years, the split of the Roman Empire into Western and Eastern halves developed in various stages; until c. 395 AD, when Constantinople (formerly known as Byzantium) officially became the capital of the new Eastern Roman Empire (or "Byzantine Empire"), and Syria (along with its neighbours) became a Byzantine province.

Both the capital (Antioch), and the province of Syria in general, continued to flourish for the next several hundred years of Byzantine rule (including Aleppo, which was second only to Antioch in this era) – until the Muslim conquest of Syria in c. 635 AD, when Antioch fell into a steep decline from which it never recovered. Antioch was finally destroyed c. 1260 AD, thus terminating the final stronghold of Byzantine influence in Syria.

The Muslim conquest of Syria

In 636 AD, the Muslims of Arabia conquered Syria; and Caliph Muawiya I declared Damascus his new home, as well as the capital of the new Islamic world. This marked a dramatic and sudden change for Syria: for the first time in almost 1,000 years, Damascus was re-instated as the seat of power; and, more importantly, Syria was to return to Semitic rule after centuries of Occidental rule.

This also marked the start of Syria's Golden Age: for the first and only time in its history, Syria was to be the capital of a world empire, a serious "mover" and an influencer. Under the Umayyad dynasty, Syria commanded an empire of considerable proportions, stretching all the way from Spain to India. Much of the wealth, knowledge, and strength of this empire flowed directly to the rulers in Damascus.

During the Umayyad Caliphate, Syria was home to an Arab Muslim presence for the first time. The Empire's ruling elite were leading families from Mecca, who moved permanently to Damascus. The conquerors were ultimately the first and the only rulers, in Syria's history, to successfully impose a new language and a new religion on almost the entire populace. However, the conversion of Syria was not an overnight success story: in the early years of the Caliphate, the population of Syria remained predominantly Aramaic- and Greek-speaking, as well as adherents to the old "pagan" religions. It wasn't until many centuries later that Syria became the majority Arabic-speaking, Islam-adherent place that it is today. The fact that a modern reader finds it far-fetched for Syria to be anything other than an "Arab Muslim country", is testament to the thoroughness with which the Umayyads and their successors undertook their transformation campaign.

Syria's Golden Age ended in 750 AD, with the Abbasid Dynasty replacing the Umayyads as rulers of the Islamic world, and with the Empire's capital shifting from Damascus to Baghdad. The rest of Syria's history through Medieval times was far from Golden – the formerly prosperous and unified region was divided up and conquered many times over.

A variety of invaders left their mark on Syria in these centuries: Byzantines successfully re-captured much of the north, and even briefly conquered Damascus in 975 AD; the Seljuk Turks controlled much of Syria from Damascus (and Aleppo) c. 1079-1104 AD; Crusaders invaded Syria (along with its neighbours), and caused rampant damage and bloodshed, during the various Crusades that took place throughout the 1100's AD; the Ayyubid Dynasty of Egypt (under Saladin and his successors) intermittently ruled Syria throughout the first half of the 1200's AD; the Mongols attacked Syria numerous times between 1260 and 1300 AD (but failed to conquer Syria or the Holy Land); the Mamluks ruled Syria (from Egypt) for most of the period 1260-1516 AD; and Tamerlane of Samarkand attacked Syria in 1400 AD, sacking both Aleppo and Damascus, and massacring thousands of the inhabitants (before being driven out again by the Mamluks).

It should also be noted that at some point during these turbulent centuries, the Alawite ethnic group and religious sect was born in the north-west of Syria, and quietly grew to dominate the villages of the mountains and the coastal plains near Latakia. The Alawites remained an isolated and secluded rural group until modern times.

These tumultuous and often bloody centuries of Syrian history came to an end in the 1500s, when the Ottoman Turks defeated the Mamluks, and wrested control of Syria and neighbouring territories from them. The subsequent four centuries, under the rule of the Ottoman Empire, marked a welcome period of peace and stability for Syria (in contrast to the devastation of the Crusader and Mongol invasion waves in prior centuries). However, the Ottomans also severely neglected Syria, along with the rest of the Levant, treating the region ever-increasingly as a provincial backwater.

The Ottomans made Aleppo the Syrian capital, thus shifting Syria's power base back to the north after almost nine centuries of Damascus rule (although by this time, Antioch had long been lying in ruins). In the Ottoman period, Aleppo grew to become Syria's largest city (and one of the more important cities of the Empire), far outstripping Damascus in terms of fame and fortune. However, Syria under the Ottomans was an impoverished province of an increasingly ageing empire.

Modern times

The modern world galloped abruptly into Syria on 1 Oct 1918, when the 10th Australian Light Horse Brigade formally accepted the surrender of Damascus by the Ottoman Empire, on behalf of the WWI Allied Forces. The cavalry were shortly followed by the arrival of Lawrence of Arabia, who helped to establish Emir Faisal as the interim leader of a British-backed Syrian government. Officially, from 1918-1920, Syria fell under the British- and French-headed Occupied Enemy Territory Administration.

For the first time since the end of the Umayyad Caliphate, almost 12 centuries prior, Syria became a unified sovereign power again on 7 Mar 1920, when Faisal became king of a newly-declared independent Greater Syria (and as with Caliph Muawiya 12 centuries earlier, King Faisal was also from Mecca). Faisal had been promised Arab independence and governorship by the Allies during WWI, in return for the significant assistance that he and his Arabian brethren provided in the defeat of the Ottomans. However, the Allies failed to live up to their promise: the French successfully attacked the fledgling kingdom; and on 14 Jul 1920, Syria's brief independence ended, and the French Mandate of Syria began its governance. King Faisal was shortly thereafter sent into exile.

Syria had enjoyed a short yet all-too-sweet taste of independence in 1920, for the first time in centuries; and under the French Mandate, the people of Syria demonstrated on numerous occasions that the tasting had left them hungry for permanent self-determination. France, however – which was supposedly filling no more than a "caretaker" role of the region, and which was supposedly no longer a colonial power – consistently crushed Syrian protests and revolts in the Mandate period with violence and mercilessness, particularly in the revolt of 1925.

During the French Mandate, the Alawites emerged as a significant force in Syria for the first time. Embittered by centuries of discrimination and repression under Ottoman rule, this non-Sunni Muslim group – along with other minority groups, such as the Druze – were keen to take advantage of the ending of the old status quo in Syria.

Under their governance, the French allowed the north-west corner of Syria – which was at the time known by its Ottoman name, the Sanjak of Alexandretta – to fall into Turkish hands. This was a major blow to the future Syrian state – although the French hardly cared about Syria's future; they considered the giving-away of the region as a good political move with Turkey. The region was declared the independent Republic of Hatay in 1938; and in 1939, the new state voted to join Turkey as Hatay Province. This region is home to the ruins of Antioch, which was (as discussed earlier) the Syrian capital for almost 1,000 years. It is therefore understandable that Hatay continues to be a thorn in Syrian-Turkish relations to this day.

Syria adopted various names under French rule. From 1922, it was called the "Syrian Federation" for several years; and from 1930, it was called the Syrian Republic. Also, in 1936, Syria signed a treaty of independence from France. However, despite the treaties and the name changes, in reality Syria remained under French Mandate control (including during WWII, first under Vichy French rule and then under Free French rule) until 1946, when the last French troops finally left for good.

Syria has been a sovereign nation (almost) continuously since 17 Apr 1946. However, the first few decades of modern independent Syria were turbulent. Syria experienced numerous changes of government during the 1950s, several of which were considered coups. From 1958-1961, Syria ceded its independence and formed the United Arab Republic with Egypt; however, this union proved short-lived, and Syria regained its previous sovereignty after the UAR's collapse. Syria has officially been known as the Syrian Arab Republic since re-declaring its independence on 28 Sep 1961.

Syria's government remained unstable for the following decade: however, in 1963, the Ba'ath party took over the nation's rule; and since then, the Ba'ath remain the ruling force in Syria to this day. The Ba'ath themselves experienced several internal coups for the remainder of the decade. Finally, in Nov 1970, then Defence Minister Hafez al-Assad orchestrated a coup; and Syria's government has effectively remained unchanged from that time to the present day. Hafez al-Assad was President until his death in 2000; at which point he was succeeded by his son Bashar al-Assad, who remains President so far amidst Syria's recent return to tumult.

The Assad family is part of Syria's Alawite minority; and for the entire 42-year reign of the Assads, the Alawites have come to dominate virtually the entire top tier of Syria's government, bureaucracy, military, and police force. Assad-ruled Syria has consistently demonstrated favouritism towards the Alawites, and towards various other minority groups (such as the Druze); while flagrantly discriminating against Syria's Sunni Muslim majority, and against larger minority groups (such as the Kurds). Centuries-old sectarian bitterness is, therefore, a highly significant factor underlying Syria's current civil war.

Modern independent Syria continues its age-old tradition of both being significantly influenced by other world powers, and of exerting an influence of its own (particularly upon its neighbours), in a rather tangled web. Syria has been a strong ally of Russia for the majority of its independent history, in particular during the Soviet years, when Syria was considered to be on the USSR's side of the global Cold War. Russia has provided arms to Syria for many years, and to this day the majority of the Syrian military's weapons arsenal is of Soviet origin. Russia demonstrated its commitment to its longstanding Syrian alliance as recently as last month, when it and China (who acted in support of Russia) vetoed a UN resolution that aimed to impose international sanctions on the Syrian regime.

Syria has also been a friend of Iran for some time, and is considered Iran's closest ally. The friendship between these two nations began in the 1980s, following Iran's Islamic revolution, when Syria supported Iran in the Iran-Iraq War. In the recent crisis, Iran has been the most vocal supporter of the Assad regime, repeatedly asserting that the current conflict in Syria is being artificially exacerbated by US intervention. Some have commented that Iran and Syria are effectively isolated together – that is, neither has any other good friends that it can rely on (even a number of other Arab states, most notably Saudi Arabia, have vocally shunned the regime) – and that as such, the respective Ayatollah- and Alawite-ruled regimes will be allies to the bitter end.

In terms of exerting an influence of its own, the principal example of this in modern Syria is the state's heavy sponsorship of Hezbollah, and its ongoing intervention in Lebanon, on account of Hezbollah among other things. Syria supports Hezbollah for two reasons: firstly, in order to maintain a strong influence within Lebanese domestic politics and power-plays; and secondly, as part of its ongoing conflict with Israel.

Of the five Arab states that attacked Israel in 1948 (and several times again thereafter), Syria is the only one that has yet to establish a peace treaty with the State of Israel. As such – despite the fact that not a single bullet has been fired between Israeli and Syrian forces since 1973 – the two states are still officially at war. Israel occupied the Golan Heights in the 1967 Six-Day War, and remains in control of the highly disputed territory to this day. The Golan Heights has alternated between Israeli and Syrian rule for thousands of years – evidence suggests that the conflict stretches back as far as Israelite-Aramean disputes three millennia ago – however, the area is recognised as sovereign Syrian territory by the international community today.

The story continues

As I hope my extensive account of the land's narrative demonstrates, Syria is a land that has seen many rulers and many influences come and go, for thousands of years. Neither conflict, nor revolution, nor foreign intervention, are anything new for Syria.

The uprising against Syria's ruling Ba'ath government began almost 18 months ago, and it has become a particularly brutal and destructive conflict in recent months. It seems unlikely that Syria's current civil war will end quickly – on the contrary, it appears to be becoming increasingly drawn-out at this stage. Various international commentators have stated that it's "almost certain" that the Assad regime will ultimately fall, and that it's now only a matter of time. Personally, I wouldn't be too quick to draw such conclusions – as my historical investigations have revealed, Syria is a land of surprises, as well as a land where webs of interrelations stretch back to ancient times.

The Assad regime was merely the latest chapter in Syria's long history; and whatever comes next, will merely be the land's following chapter. For a land that has witnessed the rise and fall of almost every major empire in civilised history; that has seen languages, religions, and ethnic groups make their distinctive mark and leave their legacy; for such a land as this, the current events – dramatic, tragic, and pivotal though they may be – are but a drop in the ocean.

]]>
Israel's new Law of Return https://greenash.net.au/thoughts/2012/06/israels-new-law-of-return/ Mon, 18 Jun 2012 00:00:00 +0000 https://greenash.net.au/thoughts/2012/06/israels-new-law-of-return/ Until a few days ago, I had no idea that Israel is home to an estimated 60,000 African refugees, the vast majority of whom come from South Sudan or from Eritrea, and almost all of whom have arrived within the past five years or so. I was startled as it was, to hear that so many refugees have arrived in Israel in such a short period; but I was positively shocked, when I then discovered that Israel plans to deport them, commencing immediately. The first plane to Juba, the capital of South Sudan, left last night.

South Sudan is the world's newest nation – it declared its independence on 9 Jul 2011. Israel was one of the first foreign nations to establish formal diplomatic ties with the fledgling Republic. Subsequently, Israel wasted no time in announcing publicly that all South Sudanese refugees would soon be required to leave; they were given a deadline of 31 Mar 2012, and were informed that they would be forcibly deported if still in Israel after that date.

Israel claims that, since South Sudan has gained independence, it is now safe for South Sudanese nationals to return home. However, independent critics refute this, saying that there is still significant armed conflict between Sudan, South Sudan, and numerous rebel groups in the region. Aside from the ongoing security concerns, South Sudan is also one of the world's poorest and least-developed countries; claiming that South Sudan is ready to repatriate its people, is a ridiculous notion at best.

Israel helped formulate the UN Refugee Convention of 1951. This was in the aftermath of the Holocaust, an event in which millions of Jewish lives could have been saved, had the rest of the world accepted more European Jews as refugees. Israel, of course, is itself one of the world's most famous "refugee nations", as the majority of the nation's founders were survivors of Nazi persecution in Europe, seeking to establish a permanent homeland where Jews could forevermore seek shelter from oppression elsewhere.

It's ironic, therefore, that Israel – of all nations – until recently had no formal policy regarding asylum seekers, nor any formal system for managing an influx of asylum seekers. (And I thought Australia's handling of asylum seekers was bad!) To this day, Israel's immigration policy consists almost entirely of the Law of Return, which allows any Jew to immigrate to the country hassle-free.

Well, it seems to me that this law has recently been amended. For Jewish refugees, the Law is that you can Return to Israel (no matter what). For non-Jews, the Law is that you're forced to Return from Israel, back to wherever you fled from. Couldn't get much more double standards than that!

Irony and hypocrisy

Many Israelis are currently up in arms over the African migrants that have "infiltrated" the country. Those Israelis obviously have very short memories (and a very poor grasp of irony). After all, it was only 21 years ago, in 1991, when Operation Solomon resulted in the airlifting of almost 15,000 black Africans from Ethiopia to Israel, as a result of heightened security risks for those people in Ethiopia. Today, over 120,000 Ethiopian Jews (African-born and Israeli-born) live in Israel.

Apparently, that's quite acceptable – after all, they were Jewish black Africans. As such, they were flown from Africa to Israel, courtesy of the State, and were subsequently welcomed with open arms. It seems that for non-Jewish black Africans (in this case, almost all of them are Christians), the tables get turned – they get flown from Israel back to Africa; and they're even given a gift of €1,000 per person, in the hope that they go away and stay away.

Oh, and in case the historical parallels aren't striking enough: the home countries of these refugees – South Sudan and Eritrea – happen to both be neighbouring Ethiopia (in fact, Operations Moses and Joshua, the precursors to Operation Solomon, involved airlifting Ethiopian Jewish refugees from airstrips within Sudan – whether modern-day South Sudan or not, is uncertain).

It's also a historical irony, that these African refugees are arriving in Israel on foot, after crossing the Sinai desert and entering via Egypt. You'd think that we Jews would have more compassion for those making an "exodus" from Egypt. However, if Israel does feel any compassion towards these people, it certainly has a strange way of demonstrating it: Israel is currently in the process of rapidly constructing a new fence along the entire length of its desert border with Egypt, the primary purpose of which is to stop the flow of illegal immigrants that cross over each year.

It's quite ironic, too, that many of the African refugees who arrive in Israel are fleeing religious persecution. After all, was the modern State of Israel not founded for exactly this purpose – to provide safe haven to those fleeing discrimination elsewhere in the world, based on their religious observance? And, after all, is it not logical that those fleeing such discrimination should choose to seek asylum in the Holy Land? Apart from South Sudan, a large number of the recent migrants are from Eritrea, a country that has banned all religious freedom, and that has the lowest Press Freedom Index ranking in the world (179th, lower even than North Korea).

Much ado about nothing

Israel is a nation that lives in fear of many threats. The recent arrival of African refugees has been identified by many Israelis (and by the government) as yet another threat, and as such, the response has been one of fear. Israel fears that these "infiltrators" will increase crime on the nation's streets. It fears that they will prove an economic burden. And it fears that they will erode the Jewish character of the State.

These fears, in my opinion, are actually completely unfounded. On the contrary, Israel's fear of the new arrivals is nothing short of ridiculous. The refugees will not increase crime in Israel; they will not prove an economic burden; and (the issue that worries Israel most of all) they will not erode the Jewish character of the state.

As recent research has shown, humanitarian immigrants in general make a significant positive contribution to their new home country; this is a contribution that is traditionally under-estimated, or even refuted altogether. Refugees, if welcomed and provided with adequate initial support, are people who desire to, and who in most cases do, contribute back to their new host country. They're desperately trying to escape a life of violence and poverty, in order to start anew; if given the opportunity to fulfil their dream, they generally respond gratefully.

Israel is a new player in the field of humanitarian immigration (new to ethnically-agnostic humanitarian immigration, at least). I can only assume that it's on account of this lack of experience, that Israel is failing to realise just how much it has to gain, should it welcome these refugees. If welcomed warmly and given citizenship, the majority of these Africans will support Israel in whatever way Israel asks them to. Almost all of them will learn Hebrew. A great number will join the IDF. And quite a few will even convert to Judaism. In short, these immigrants could prove to be just the additional supporters of the Jewish status quo that Israel needs.

What is Israel's biggest fear in this day and age? That the nation's Arab / Palestinian population is growing faster than its Jewish population; and that in 20 years' time, the Jews will be voted out of their own State by an Arab majority. As such, what should Israel be actively trying to do? It's in Israel's interests to actively encourage any immigration that contributes people / votes to the Jewish side of the equation. And, in my opinion, if Israel were to accept these African refugees with open arms today, then in 20 years' time they would be exactly the additional people / votes that the status quo requires.

Finally, as many others have already stated: apart from being ironic, hypocritical, impractical, and (most likely) illegal, Israel's current policy towards its African refugees is inhumane. As a Jew myself, I feel ashamed and incredulous that Israel should behave in this manner, when a group of desperate and abandoned people comes knocking at its doorstep. It is an embarrassment to Jews worldwide. We of all people should know better and act better.

]]>
Introducing the Drupal Handy Block module https://greenash.net.au/thoughts/2012/06/introducing-the-drupal-handy-block-module/ Fri, 08 Jun 2012 00:00:00 +0000 https://greenash.net.au/thoughts/2012/06/introducing-the-drupal-handy-block-module/ I've been noticing more and more lately, that for every new Drupal site I build, I define a lot of custom blocks. I put the code for these blocks in one or more custom modules, and most of them are really simple. For me, at least, the most common task that these blocks perform, is to display one or more fields of the node (or other entity) page currently being viewed; and in second place, is the task of displaying a list of nodes from a nodequeue (as I'm rather a Nodequeue module addict, I tend to have nodequeues strewn all over my sites).

In short, I've gotten quite bored of copy-pasting the same block definition code over and over, usually with minimal changes. I also feel that such simple block definitions don't warrant defining a new custom module – as they have zero interesting logic / functionality, and as their purpose is purely presentational, I'd prefer to define them at the theme level. Additionally, every Drupal module has both administrative overhead (need to install / enable it on different environments, need to manage its deployment, etc), and performance overhead (every extra PHP include() call involves opening and reading a new file from disk, and every enabled Drupal module is a minimum of one extra PHP file to be included); so, fewer enabled modules means a faster site.

To make my life easier – and the life of anyone else in the same boat – I've written the Handy Block module. (As the project description says,) if you often have a bunch of custom modules on your site, that do nothing except implement block hooks (along with block callback functions), for blocks that do little more than display some fields for the entity currently being viewed, then Handy Block should… well, it should come in handy! You'll be able to do the same thing in just a few lines of your template.php file; and then, you can delete those custom modules of yours altogether.

The custom module way

Let me give you a quick example. Your page node type has two fields, called sidebar_image and sidebar_text. You'd like these two fields to display in a sidebar block, whenever they're available for the page node currently being viewed.

Using a custom module, how would you achieve this?

First of all, you have to build the basics for your new custom module. In this case, let's say you want to call your module pagemod – you'll need to start off by creating a pagemod directory (in, for example, sites/all/modules/custom), and writing a pagemod.info file that looks like this:

name = Page Mod
description = Custom module that does bits and pieces for page nodes.
core = 7.x
files[] = pagemod.module

You'll also need an almost-empty pagemod.module file:

<?php

/**
 * @file
 * Custom module that does bits and pieces for page nodes.
 */

Your module now exists – you can enable it if you want. Now, you can start building your sidebar block – let's say that you want to call it sidebar_snippet. First off, you need to tell Drupal that the block exists, by implementing hook_block_info() (note: this and all following code goes in pagemod.module, unless otherwise indicated):

<?php
/**
 * Implements hook_block_info().
 */
function pagemod_block_info() {
  $blocks['sidebar_snippet']['info'] = t('Page sidebar snippet');
  return $blocks;
}

Next, you need to define what gets shown in your new block. You do this by implementing hook_block_view():

<?php
/**
 * Implements hook_block_view().
 */
function pagemod_block_view($delta = '') {
  switch ($delta) {
    case 'sidebar_snippet':
      return pagemod_sidebar_snippet_block();
  }
}

To keep things clean, it's a good idea to call a function for each defined block in hook_block_view(), rather than putting all your code directly in the hook function. Right now, you only have one block to render; but before you know it, you may have fifteen. So, let your block do its stuff here:

<?php
/**
 * Displays the sidebar snippet on page nodes.
 */
function pagemod_sidebar_snippet_block() {
  // Pretend that your module also contains this function - for code
  // example, see handyblock_get_curr_page_node() in handyblock.module.
  $node = pagemod_get_curr_page_node();
  if (empty($node->nid) || !($node->type == 'page')) {
    return;
  }

  if (!empty($node->field_sidebar_image['und'][0]['uri'])) {
    // Pretend that your module also contains this function - for code
    // example, see tpl_field_vars_styled_image_url() in
    // tpl_field_vars.module
    $image_url = pagemod_styled_image_url(
      $node->field_sidebar_image['und'][0]['uri'],
      'sidebar_image'
    );

    $body = '';
    if (!empty($node->field_sidebar_text['und'][0]['safe_value'])) {
      $body = $node->field_sidebar_text['und'][0]['safe_value'];
    }

    $block['content'] = array(
      '#theme' => 'pagemod_sidebar_snippet',
      '#image_url' => $image_url,
      '#body' => $body,
    );

    return $block;
  }
}

Almost done. Drupal now recognises that your block exists, which means that you can enable your block and assign it to a region on the administer -> structure -> blocks page. Drupal will execute the code you've written above, when it tries to display your block. However, it won't yet display anything much, because your block's content is rendered via a custom '#theme' implementation, and that theme hook hasn't been registered yet.

Because you're an adherent of theming best practices, and you like to output all parts of your page using theme templates rather than theme functions, let's register this themable item, and let's define it as having a template:

<?php
/**
 * Implements hook_theme().
 */
function pagemod_theme() {
  return array(
    'pagemod_sidebar_snippet' => array(
      'variables' => array(
        'image_url' => NULL,
        'body' => NULL,
      ),
      'template'  => 'pagemod-sidebar-snippet',
    ),
  );
}

And, as the final step, you'll need to create a pagemod-sidebar-snippet.tpl.php file (also in your pagemod module directory), to actually output your block:

<img src="<?php print $image_url; ?>" id="sidebar-snippet-image" />

<?php if (!empty($body)): ?>
<div id="sidebar-snippet-body-wrapper">
  <?php print $body; ?>
</div><!-- /#sidebar-snippet-body-wrapper -->
<?php endif; ?>

Give your Drupal cache a good ol' clear, and voila – it sure took a while, but you've finally got your sidebar block built and displaying.

The Handy Block way

Now, to contrast, let's see how you'd achieve the same result, using the Handy Block module. No need for any of the custom pagemod module stuff above. Just enable Handy Block, and then place this code in your active theme's template.php file:

<?php
/**
 * Handy Block theme callback implementation.
 */
function MYTHEME_handyblock() {
  return array(
    'sidebar_snippet' => array(
      'block_info' => t('MYTHEME sidebar snippet'),
      'handyblock_context' => 'curr_page_node',
      'theme_variables' => array(
        'image_url',
        'body',
      ),
    ),
  );
}

/**
 * Handy Block alter callback for block 'sidebar_snippet'.
 */
function MYTHEME_handyblock_sidebar_snippet_alter(&$block, $context) {
  $node = $context['node'];
  $vars = tpl_field_vars($node);
  if (empty($vars['sidebar_image'])) {
    $block = NULL;
    return;
  }

  $block['content']['#image_url'] =
    $vars['sidebar_image']['sidebar_image_url'];
  if (!empty($vars['sidebar_text'])) {
    $block['content']['#body'] = $vars['sidebar_text'];
  }
}

The MYTHEME_handyblock() callback automatically takes care of all three of the Drupal hook implementations that you previously had to write manually: hook_block_info(), hook_block_view(), and hook_theme(). The MYTHEME_handyblock_BLOCKNAME_alter() callback then lets you do whatever you want to your block, after Handy Block has provided the current page node as context and set the block's '#theme' property (in this case, the alter callback controls the block's visibility, based on whether an image is available; and it populates the block with the image and text fields).

(Note: the example above also makes use of Template Field Variables, to make the code even more concise, and even easier to read and to maintain – for more info, see my previous article about Template Field Variables).

Handy Block has done the "paperwork" (i.e. the hook implementations), such that Drupal expects a handyblock-sidebar-snippet.tpl.php file for this block (in your active theme's directory). So, let's create one (looks the same as the old pagemod-sidebar-snippet.tpl.php template):

<img src="<?php print $image_url; ?>" id="sidebar-snippet-image" />

<?php if (!empty($body)): ?>
<div id="sidebar-snippet-body-wrapper">
  <?php print $body; ?>
</div><!-- /#sidebar-snippet-body-wrapper -->
<?php endif; ?>

After completing these steps, clear your Drupal cache, and assign your block to a region – and hey presto, you've got your custom block showing. Only this time, no custom module was needed, and significantly fewer lines of code were written.

In summary

Handy Block is not rocket science. (As the project description says,) this is a convenience module, for module developers and for themers. All it really does, is automate a few hook implementations for you. By implementing the Handy Block theme callback function, Handy Block implements hook_theme(), hook_block_info(), and hook_block_view() for you.

Handy Block is for Drupal site builders, who find themselves building a lot of blocks that:

  • Display more than just static text (if that's all you need, just use the 'add block' feature in the Drupal core block module)
  • Display something which is pretty basic (e.g. fields of the node currently being viewed), but which does require some custom code (albeit code that doesn't warrant a whole new custom module on your site)
  • Require a custom theme template

I should also mention that, before starting work on Handy Block, I had a look around for similar existing Drupal modules, and I found two interesting candidates. Both can be used to do the same thing that I've demonstrated in this article; however, I decided to go ahead and write Handy Block anyway, and I did so because I believe Handy Block is a better tool for the job (for the target audience that I have in mind, at least). Nevertheless, I encourage you to have a look at the competition as well.

The first alternative is CCK Blocks. This module lets you achieve similar results to Handy Block – however, I'm not so keen on it for several reasons: all its config is through the Admin UI (and I want my custom block config in code); it doesn't let you do anything more than output fields of the entity currently being viewed (and I want other options too, e.g. output a nodequeue); and it doesn't allow for completely custom templates for each block (although overriding its templates would probably be adequate in many cases).

The second alternative is Bean. I'm actually very impressed with what this module has to offer, and I'm hoping to take it for a spin sometime soon. However, for me, Bean sits at the opposite extreme to CCK Blocks – whereas CCK Blocks is too "light" and only has an admin UI for configuration, Bean is too complicated for simple use cases, as it requires implementing no small amount of code, within some pretty complex custom hooks. I decided against using Bean, because: it requires writing code within custom modules (not just at the theme layer); it's designed for things more complicated than just outputting fields of the entity currently being viewed (e.g. performing custom Entity queries in a block, without the help of Views); and its learning curve is too steep for someone who primarily wears a Drupal themer hat.

Apart from the administrative and performance benefits of defining custom blocks in your theme's template.php file (rather than in a custom module), doing all the coding at the theme level also has another advantage. It makes custom block creation more accessible to people who are primarily themers, and who are reluctant (at best) module developers. This is important, because those big-themer-hat, small-developer-hat people are the primary target audience of this module (with the reverse – i.e. big-developer-hat, small-themer-hat people – being the secondary target audience).

Such people are scared and reluctant to write modules; they're more comfortable sticking to just the theme layer. Hopefully, this module will make custom block creation more accessible, and less daunting, for such people (and, in many cases, custom block creation is a task that these people need to perform quite often). I also hope that the architecture of this module – i.e. a callback function that must be implemented in the active theme's template.php file, not in a module – isn't seen as a hack or as un-Drupal-like. I believe I've justified fairly thoroughly, why I made this architecture decision.

I also recommend that you use Template Field Variables in conjunction with Handy Block (see my previous article about Template Field Variables). Both of them are utility modules for themers. The idea is that, used stand-alone or used together, these modules make a Drupal themer's life easier. Happy theming, and please let me know your feedback about the module.

Introducing the Drupal Template Field Variables module https://greenash.net.au/thoughts/2012/05/introducing-the-drupal-template-field-variables-module/ Tue, 29 May 2012 00:00:00 +0000 https://greenash.net.au/thoughts/2012/05/introducing-the-drupal-template-field-variables-module/ Drupal 7's new Field API is a great feature. Unfortunately, theming an entity and its fields can be quite a daunting task. The main reason for this, is that the field variables that get passed to template files are not particularly themer-friendly. Themers are HTML markup and CSS coders; they're not PHP or Drupal coders. When themers start writing their node--page.tpl.php file, all they really want to know is: How do I output each field of this page [node type], exactly where I want, and with minimal fuss?

It is in the interests of improving the Drupal Themer Experience, therefore, that I present the Template Field Variables module. (As the project description says,) this module takes the mystery out of theming fieldable entities. For each field in an entity, it extracts the values that you actually want to output (from the infamous "massive nested arrays" that Drupal provides), and it puts those values in dead-simple variables.

What we've got

Let me tell you a story, about an enthusiastic fledgling Drupal themer. The sprightly lad has just added a new text field, called byline, to his page node type in Drupal 7. He wants to output this field at the bottom of his node--page.tpl.php file, in a blockquote tag.

Using nothing but Drupal 7 core, how does he do it?

He's got two options. His first option — the "Drupal 7 recommended" option — is to use the Render API, to hide the byline from the spot where all the node's fields get outputted by default; and to then render() it further down the page.

Well, says the budding young themer, that sure sounds easy enough. So, the themer goes and reads up on how to use the Render API, finds the example snippets of hide($content['bla']); and print render($content['bla']);, and whips up a template file:

<?php
/* My node--page.tpl.php file. It rocks. */
?>

<?php // La la la, do some funky template stuff. ?>

<?php // Don't wanna show this in the spot where Drupal vomits
      // out content by default, let's call hide(). ?>
<?php hide($content['field_byline']); ?>

<?php // Now Drupal can have a jolly good ol' spew. ?>
<?php print render($content); ?>

<?php // La la la, more funky template stuff. ?>

<?php // This is all I need in order to output the byline at the
      // bottom of the page in a blockquote, right? ?>
<blockquote><?php print render($content['field_byline']); ?></blockquote>
 

Now, let's see what page output that gives him:

<!-- La la la, this is my page output. -->

<!-- La la la, Drupal spewed out all my fields here. -->

<!-- La la... hey!! What the..?! Why has Drupal spewed out a -->
<!-- truckload of divs, and a label, that I didn't order? -->
<!-- I just want the byline, $#&%ers!! -->
<blockquote><div class="field field-name-field-byline field-type-text field-label-above"><div class="field-label">Byline:&nbsp;</div><div class="field-items"><div class="field-item even">It&#039;s hip to be about something</div></div></div></blockquote>
 

Our bright-eyed Drupal theming novice was feeling pretty happy with his handiwork so far. But now, disappointment lands. All he wants is the actual value of the byline. No div soup. No random label. He created a byline field. He saved a byline value to a node. Now he wants to output the byline, and only the byline. What more could possibly be involved, in such a simple task?

He racks his brains, searching for a solution. He's not a coder, but he's tinkered with PHP before, and he's pretty sure it's got some thingamybob that lets you cut stuff out of a string that you don't want. After a bit of googling, he finds the code snippets he needs. Ah! He exclaims. This should do the trick:

<?php // I knew I was born to be a Drupal ninja. Behold my
      // marvellous creation! ?>
<blockquote><?php print str_replace('<div class="field field-name-field-byline field-type-text field-label-above"><div class="field-label">Byline:&nbsp;</div><div class="field-items"><div class="field-item even">', '', str_replace('</div></div></div>', '', render($content['field_byline']))); ?></blockquote>
 

Now, now, Drupal veterans – don't cringe. I know you've all seen it in a real-life project. Perhaps you even wrote it yourself, once upon a time. So, don't be too quick to judge the young grasshopper harshly.

However, although the str_replace() snippet does indeed do the trick, even our newbie grasshopper recognises it for the abomination and the kitten-killer that it is, and he cannot live knowing that a git blame on line 47 of node--page.tpl.php will forever reveal the awful truth. So, he decides to read up a bit more, and he finally discovers that the recommended solution is to create your own field.tpl.php override file. So, he whips up a one-line field--field-byline.tpl.php file:

<?php print render($items); ?>
 

And, at long last, he's got the byline and just the byline outputting… and he's done it The Drupal Way!

The newbie themer begins to feel more at ease. He's happy that he's learnt how to build template files in a Drupal 7 theme, without resorting to hackery. To celebrate, he snacks on juicy cherries dipped in chocolate-flavoured custard.

But a niggling concern remains at the back of his mind. Perhaps what he's done is The Drupal Way, but he's still not convinced that it's The Right Way. It seems like a lot of work — calling hide(); in one spot, having to call print render(); (not just print) further down, having to override field.tpl.php — and all just to output a simple little byline. Is there really no one-line alternative?

Ever optimistic, the aspiring Drupal themer continues searching, until at last he discovers that it is possible to access the raw field values from a node template. And so, finally, he settles for a solution that he's more comfortable with:

<?php
/* My node--page.tpl.php file. It rocks. */
?>

<?php // La la la, do some funky template stuff. ?>

<?php // Still need hide(), unless I manually output all my node fields,
// and don't call print render($content);
// grumble grumble... ?>
<?php hide($content['field_byline']); ?>

<?php // Now Drupal can have a jolly good ol' spew. ?>
<?php print render($content); ?>

<?php // La la la, more funky template stuff. ?>

<?php // Yay - I actually got the raw byline value to output here! ?>
<blockquote><?php print check_plain($node->field_byline[$node->language][0]['value']); ?></blockquote>
 

And so the sprightly young themer goes on his merry way, and hacks up .tpl.php files happily ever after.

Why all that sucks

That's the typical journey of someone new to Drupal theming, and/or new to the Field API, who wants to customise the output of fields for an entity. It's flawed for a number of reasons:

  • We're making themers learn how to make function calls unnecessarily. It's OK to make them learn function calls if they need to do something fancy. But in the case of the Render API, they need to learn two – hide() and render() – just to output something. All they should need to know is print.
  • We're making themers understand a complex, unnecessary, and artificially constructed concept: the Render API. Themers don't care how Drupal constructs the page content; they don't care what render arrays are (or if they exist); and they shouldn't have to care.
  • We're making it unnecessarily difficult to output raw values, using the recommended theming method (i.e. using the Render API). In order to output raw values using the render API, you basically have to override field.tpl.php in the manner illustrated above. This will prove to be too advanced (or simply too much effort) for many themers, who may resort to the type of string-replacement hackery described above.
  • The only actual method of outputting the raw value directly is fraught with problems:
    • It requires a long line of code that drills deep into nested arrays / objects before it can print the value
    • Those nested arrays / objects are hard even for experienced developers to navigate / debug, let alone newbie themers
    • It requires themers to concern themselves with field translation and with the i18n API
    • Guesswork is needed for determining the exact key that will yield the outputtable value, at the end of the nested array (usually 'value', but sometimes not, e.g. 'url' for link fields)
    • It's highly prone to security issues, as novice themers can't be expected to understand when to use 'value' vs 'safe_value', or when check_plain() / filter_xss_admin() should be called, etc. (even experienced developers often misuse or omit Drupal's string output security, as anyone who's familiar with the Drupal security advisories would know) – see the sketch just after this list
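
To make that last point concrete, here's a minimal sketch of the 'value' vs 'safe_value' trap, using the same byline text field as in the story above (this is plain Drupal 7 field access, nothing module-specific):

<?php
// Raw user input: NOT safe to print as-is.
// (LANGUAGE_NONE is Drupal 7's constant for 'und'.)
$raw = $node->field_byline[LANGUAGE_NONE][0]['value'];

// OK: the raw value gets sanitised on output.
print '<blockquote>' . check_plain($raw) . '</blockquote>';

// Also OK: 'safe_value' has already been run through the field's text
// processing by the core Text module, so it can be printed directly.
print '<blockquote>'
  . $node->field_byline[LANGUAGE_NONE][0]['safe_value']
  . '</blockquote>';

// The trap: printing 'value' without check_plain() (or equivalent) is an
// XSS hole waiting to happen - and it's an easy mistake to make.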

In a nutshell: the current system has too high a learning curve, it's unnecessarily complex, and it unnecessarily exposes themers to security risks.

A better way

Now let me tell you another story, about that same enthusiastic fledgling Drupal themer, who wanted to show his byline in a blockquote tag. This time, he's using Drupal 7 core, plus the Template Field Variables module.

First, he opens up his template.php file, and adds the following:

<?php
/**
 * Preprocessor for node.tpl.php template file.
 */
function foobar_preprocess_node(&$vars) {
  tpl_field_vars_preprocess($vars, $vars['node'], array(
    'cleanup' => TRUE,
    'debug' => TRUE,
  ));
}

After doing this (and after clearing his cache), he opens up his node (of type 'page') in a browser; and because he's set 'debug' => TRUE (above), he sees this output on page load:

$body =

<p>There was a king who had twelve beautiful daughters. They slept in
twelve beds all in one room; and when they went to bed, the doors were
shut and locked up; but every morning their shoes were found to be
quite worn through as if they had been danced in all night; and yet
nobody could find out how it happened, or where they had been.</p>
<p>Then the king made it known to all the land, that if any person
could discover the secret, and find out where it was that the
princesses danced in the night, he should have the one he liked best
for his wife, and should be king after his ...

$byline =

It's hip to be about something

And now, he has all the info he needs in order to write his new node--page.tpl.php file, which looks like this:

<?php
/* My node--page.tpl.php file. It rocks. */
?>

<?php // La la la, do some funky template stuff. ?>

<?php // No spewing, please, Drupal - just the body field. ?>
<?php print $body; ?>

<?php // La la la, more funky template stuff. ?>

<?php // Output the byline here, pure and simple. ?>
<blockquote><?php print $byline; ?></blockquote>
 

He sets 'debug' => FALSE in his template.php file, he reloads the page in his browser, and… voila! He's done theming for the day.

About the module

The story that I've told above, describes the purpose and function of the Template Field Variables module better than a plain description can. (As the project description says,) it's a utility module for themers. Its only purpose is to make Drupal template development less painful. It has no front-end. It stores no data. It implements no hooks. In order for it to do anything, some coding is required, but only coding in your theme files.

I've illustrated here the most basic use case of Template Field Variables, i.e. outputting simple text fields. However, the module's real power lies in its ability to let you print out the values of more complex field types, just as easily. Got an image field? Want to print out the URL of the original-size image, plus the URLs of any/all of the resized derivatives of that image… and all in one print statement? Got a date field, and want to output the 'start date' and 'end date' values with minimal fuss? Got a nodereference field, and want to output the referenced node's title within an h3 tag? Got a field with multiple values, and want to loop over those values in your template, just as easily as you output a single value? For all these use cases, Template Field Variables is your friend.
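
To give you a rough feel for what that looks like in a template, here's a quick sketch of a hypothetical node--page.tpl.php snippet. The variable names and array keys below ($photo, 'large_url', $tags) are purely illustrative assumptions – the real names depend on your field names and field types, so check the 'debug' => TRUE output on your own site for the actual variables:

<?php // Hypothetical image field: print a resized derivative's URL. ?>
<img src="<?php print $photo['large_url']; ?>" alt="" />

<?php // Hypothetical multi-value field: loop over it like a plain PHP array. ?>
<ul>
  <?php foreach ($tags as $tag): ?>
  <li><?php print $tag; ?></li>
  <?php endforeach; ?>
</ul>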

If you never want to again see a template containing:<