Rusted Ruby

I gave a talk this week! If you’re interested in a 10,000m look at Rust and then a delve into Helix, here are the slides!

Rusted Ruby from Ian Pointer

The talk suffered a bit from my usual deer-in-headlights presenting style, and from the fact that I lost a week of preparation to wisdom teeth shenanigans, but hey, I implemented a 3x speedup of Ruby’s Pathname.absolute? in less than ten minutes, which isn’t too bad…

Going to be a busy six weeks coming up - as of yesterday, I have trips to DC, Cincinnati, and of course Bicester and London before the end of August. And then…and then, things get even more fraught as I begin my 2018 masterplan. Which is much less impressive than it sounds, but will involve quite a bit of upheaval. But all for the better in the end!

Hopefully, I will sleep a little more this upcoming week, too. And have fewer exam-stress dreams set back in Manchester. It’s been 17 years…

Finally...Tea!

I can drink hot and carbonated drinks again. Which is good, because I seem to have bought all of the Diet Coke in Durham during the past two weeks of Target sales. I may have miscalculated how much even I can drink, but you can’t beat 4 cases of 12 cans for $8.88, can you?

Other than that…it has been a quiet week. I mean, really quiet. The most exciting thing of the week has likely been the discovery of 1980s Fox. Come back next week for more exciting updates!

The Colour Problem - Neural Networks, the BBC, and the Rag-and-bone Man

tl;dr - a fully convolutional neural network trained to colourize surviving black-and-white recordings of colour Steptoe and Son episodes.

One of the most infamous parts of the BBC’s history was its practice of wiping old episodes of TV programmes throughout the years to re-use videotape. The most famous of these is, of course, Doctor Who, but it was far from the only programme to suffer in this way. Many episodes of The Likely Lads, Dad’s Army, and Steptoe And Son have been lost.

Every so often, an off-air recording is found. Recorded at the time of broadcast, these can be a wonderful way of plugging the archive gaps (who said that piracy is always a crime, eh?). If the BBC is lucky, then a full colour episode is recovered (if it was broadcast in colour). More often for the older shows, however, it’s likely that the off-air recording is in black and white. Even here, though, the BBC and the Restoration Team have been able to work magic. If a b/w recording has colour artifacts and noise in the signal (known as chromadots), then the colour can be restored to the image (this was done for an episode of Dad’s Army, ‘Room At The Bottom’).

If we don’t have the chromadots, we’re out of luck. Or are we? A couple of months ago, I saw Jeremy Howard showing off a super-resolution neural net. To his surprise, one of his test pictures didn’t just come out larger; the network had corrected the colour balance in the image as well. A week later, I was reading a comedy forum where somebody offhandedly joked about the irony of ‘The Colour Problem’ being an episode of Steptoe and Son that only existed in a b/w recording…and I had an idea.

Bring out your data!

Most image-related neural networks are trained on large datasets, e.g. ImageNet or COCO. I could have chosen to take a pre-trained network like VGG or Inception and adapted it to my own needs. But the show was, after all, a classic 60s/70s BBC sitcom production - repeated use of the same sets, 16mm film for exteriors, video for interiors, and so on. So I wondered: would it make sense to train a neural network on the existing colour episodes and then get it to colourize based on what it had learnt from them?1

All I needed was access to the colour episodes and ‘The Colour Problem’ itself. In an ideal world, at this point I would have pulled out DVDs or Blu-Rays and taken high-quality images from those. As I’m not exactly a fan of the show…I don’t have any of those. But what I did have was a bunch of YouTube links, a downloader app, and ffmpeg. It wasn’t going to be perfect, but it’d do for now.

To train the network, I produced a series of stills from each colour episode in two formats - colour and b/w. The networks I created would train by altering the b/w images and using the colour stills as ‘labels’ to compare against, updating the network as required.

For those of you interested, here are the two ffmpeg commands that did this:

    ffmpeg -i 08_07.mp4 -vf scale=320:240 train/08_07_%06d.jpg
    ffmpeg -i 08_07.mp4  -vf format=gray traingray/08_07_%06d.png

I was now armed with 650,000 still images of the show. I never thought my work would end up with me having over half-a-million JPGs of Steptoe and Son on my computer, but here we are.

But First on BBC 1 Tonight

Having got hold of all that data, I then took a random sample of 5000 images from the 650,000. Why? Because it can be useful to work on a sample of the dataset before launching into the whole lot.

As training on a sample takes far less time than training on the full dataset, you can spot mistakes much more quickly, and it’s a great way of getting a ‘feel’ for the data and for which architectures might or might not be useful.
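For the curious, pulling the sample needs nothing fancier than something like this sketch (the train/ and traingray/ directories follow the ffmpeg commands above; the sample/ layout is just an illustrative assumption):

    # A rough sketch, not the exact script: copy a matched random sample of
    # greyscale/colour still pairs into a smaller working directory.
    import random
    import shutil
    from pathlib import Path

    grey_stills = sorted(Path("traingray").glob("*.png"))
    sample = random.sample(grey_stills, 5000)

    Path("sample/gray").mkdir(parents=True, exist_ok=True)
    Path("sample/colour").mkdir(parents=True, exist_ok=True)

    for grey_path in sample:
        # the matching colour still has the same stem, but as a JPG in train/
        colour_path = Path("train") / (grey_path.stem + ".jpg")
        shutil.copy(grey_path, Path("sample/gray") / grey_path.name)
        shutil.copy(colour_path, Path("sample/colour") / colour_path.name)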

FSRCNN

Normally, if I were playing with a dataset for the first time, I’d likely start with a couple of fully-connected layers - but in this case, I knew I wanted to start with something like the super-resolution architecture I had seen a few weeks ago. I had a quick Google and found FSRCNN, a fully-convolutional network architecture designed for scaling up images.

What happens in FSRCNN is that the image passes through a series of convolutional layers that shrink the representation down to something much smaller, then another set of convolutional layers operates on that smaller data. Finally, everything goes through de-convolutional layers to scale the image back up to the required (larger!) size.

Here’s a look at the architecture, visualized with Keras’s SVG model rendering:

[Model diagram: InputLayer → BatchNormalization → feature_extraction (Conv2D) → BatchNormalization → create_channels (Conv2D) → BatchNormalization → shrinking1 (Conv2D) → BatchNormalization → mapping1–mapping4 (Conv2D, each followed by BatchNormalization) → channels (Conv2D) → BatchNormalization → expand (UpSampling2D)]

(The idea of shrinking your data, operating on that instead and then scaling it back up is a common one in neural networks)

I made some modifications to the FSRCNN architecture. First, I wanted the output image to be the same size as the input rather than larger. Second, I altered the network to take an input with only one channel (greyscale) but produce a 3-channel RGB picture.
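To make the shape of that concrete, here’s a minimal Keras sketch of an FSRCNN-style network with those two modifications - greyscale in, same-sized RGB out. The filter counts and kernel sizes are illustrative guesses rather than the exact values:

    # A minimal sketch of an FSRCNN-style network, modified as described:
    # one greyscale channel in, a same-sized 3-channel RGB image out.
    # Layer widths here are illustrative, not the exact model.
    from keras.models import Sequential
    from keras.layers import Conv2D, BatchNormalization

    model = Sequential([
        # feature extraction on the single greyscale channel
        Conv2D(56, (5, 5), padding="same", activation="relu",
               input_shape=(240, 320, 1)),
        BatchNormalization(),
        # shrink the channel dimension so the mapping layers stay cheap
        Conv2D(12, (1, 1), padding="same", activation="relu"),
        BatchNormalization(),
        # non-linear mapping on the shrunken representation
        Conv2D(12, (3, 3), padding="same", activation="relu"),
        Conv2D(12, (3, 3), padding="same", activation="relu"),
        Conv2D(12, (3, 3), padding="same", activation="relu"),
        Conv2D(12, (3, 3), padding="same", activation="relu"),
        BatchNormalization(),
        # expand the channels back out...
        Conv2D(56, (1, 1), padding="same", activation="relu"),
        # ...and produce an RGB image at the same resolution as the input
        Conv2D(3, (5, 5), padding="same", activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mse")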

Armed with this model, I ran a training session on my sample…and…got a complete mess.

[Image: mess]

Well, that worked great, didn’t it? sigh

Colour Spaces - From RGB to CIELAB

As I returned to the drawing board, I wondered about my decision to convert from greyscale to RGB. It felt wrong. I had the greyscale data, but I was essentially throwing that away and making the network generate 3 channels of data from scratch. Was there a way I could instead recreate the effect of the chromadots and add it to the original greyscale information? That way, I’d only be generating two channels of new synthetic data and combining it with reality. It seemed worth exploring.

The answer seemed to be found in the CIELAB colour space. In this space, the L co-ordinate represents lightness, a* is a point between red/magenta and green, and b* is a point between yellow and blue. I had the L co-ordinates in my greyscale image - I just had to generate the a* and b* co-ordinates for each image and then combine them with the original L. Simple!
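Mechanically, the recombination step would look something like this sketch, using scikit-image’s CIELAB conversions and assuming some model that predicts a*/b* channels (scaled to roughly [-128, 128]) from a greyscale frame:

    # A rough sketch of recombining predicted a*/b* channels with the real
    # L channel. `model` is a stand-in for a network that outputs two
    # channels in CIELAB's roughly [-128, 128] range.
    import numpy as np
    from skimage import color, io, img_as_float

    grey = img_as_float(io.imread("frame_bw.png", as_gray=True))  # [0, 1]
    L = grey * 100.0                               # CIELAB lightness is 0..100

    # predict a* and b* from the greyscale frame - shape (height, width, 2)
    ab = model.predict(grey[np.newaxis, ..., np.newaxis])[0]

    # stack L, a* and b* back together and convert to RGB for viewing
    lab = np.dstack([L, ab[..., 0], ab[..., 1]])
    rgb = color.lab2rgb(lab)
    io.imsave("frame_colour.png", (rgb * 255).astype(np.uint8))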

Other Colorization Models Are Available

While I was doing that research, though, I also stumbled on a paper called Colorful Image Colorization. This paper seemed to confirm my choice of moving to the CIELAB colour space and also provided a similar architecture to FSRCNN, but with more filters running on the scaled-down image. I blended the two architectures together with Keras, as I wasn’t entirely convinced by the paper’s method of choosing colours via quantized bins and a generated probability distribution.

Here’s what my architecture looked like at this point:

[Model diagram: InputLayer → stacked Conv2D/BatchNormalization blocks working on the scaled-down image → alternating Conv2DTranspose and Conv2D layers scaling back up → Concatenate with the original input]

And what did I get?

Sepia. Not wonderful. But better than the previous hideous mess!

Let’s Make It U-Net

Okay, so maybe Zhang et al. had a point, and I needed to include that probability distribution and the bins. But…looking at my architecture again, I had another idea: U-Net.

U-Net is an architecture that was designed for segmenting medical images, but has proved to be incredibly strong in Kaggle competitions on all sorts of other problems.

The innovation of the U-Net architecture is that it passes information from the layers at the start of the network (the left side of the U) across to the corresponding layers on the scaling-up side on the right, so structure and other detail captured early on can be used alongside the information that’s passed up through the scaling-up blocks. Yay, more information!

My existing architecture was basically a U-Net without the left-to-right arrows…so I thought ‘why not add them in and see what breaks?’.

I added a single skip connection from the first block of scaling-down filters to the last block of the scaling-up filters, just to see if I’d get any benefit. And…finally, I was getting somewhere. Here’s the current architecture - the final Lambda layer is just a multiplication to bring the values of the two new channels into the CIELAB colour space’s a* and b* ranges:

[Model diagram: InputLayer → Conv2D/BatchNormalization blocks scaling down → Conv2D and UpSampling2D blocks scaling back up, with a Concatenate carrying features from an early scaling-down block into the final scaling-up block → Lambda → Concatenate with the original input (the L channel)]
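In code, the idea looks roughly like this - a simplified sketch rather than the exact model, with illustrative layer widths - showing the single skip connection and the final Lambda scaling:

    # A simplified sketch of the skip-connection idea: one Concatenate carries
    # early, full-resolution features over to the scaling-up side, and a final
    # Lambda rescales the two predicted channels into CIELAB's a*/b* range.
    from keras.models import Model
    from keras.layers import Input, Conv2D, UpSampling2D, Concatenate, Lambda

    inp = Input(shape=(240, 320, 1))                     # the L channel

    # scaling-down side
    d1 = Conv2D(32, (3, 3), padding="same", activation="relu")(inp)
    d2 = Conv2D(64, (3, 3), strides=2, padding="same", activation="relu")(d1)
    d3 = Conv2D(128, (3, 3), strides=2, padding="same", activation="relu")(d2)

    # scaling-up side
    u1 = UpSampling2D()(Conv2D(64, (3, 3), padding="same", activation="relu")(d3))
    u2 = UpSampling2D()(Conv2D(32, (3, 3), padding="same", activation="relu")(u1))

    # the U-Net-ish bit: hand the early features straight across
    merged = Concatenate()([u2, d1])

    # predict a*/b* in [-1, 1], then scale into CIELAB's rough [-128, 128] range
    ab = Conv2D(2, (3, 3), padding="same", activation="tanh")(merged)
    ab = Lambda(lambda x: x * 128.0)(ab)

    # stitch the original L channel back on to give a full Lab output
    out = Concatenate()([inp, ab])

    model = Model(inputs=inp, outputs=out)
    model.compile(optimizer="adam", loss="mse")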

(it turns out that Zhang and his team released a new paper in May that also includes U-Net-like additions, so we’re thinking along the same lines at least!)

You Have Been Watching

Here’s a clip of the original b/w of ‘The Colour Problem’ side-by-side with my colourized version:

I’m not going to claim that it’s perfect. Or even wonderful. But: my partial U-Net was trained only on 5000 stills and only for 10 epochs (taking just over an hour). That it produced something akin to a fourth-generation VHS tape with no further effort on my part seems amazing.

End of Part One

Obviously, the next step is to train the net on the full dataset. This is going to require some rejigging of the training and test data, as the full 650,000 image dataset can’t fit in memory. I’ll probably be turning to bcolz for that. At that point I’ll also likely throw the code up on GitHub (after some tidying up). I’m also moving away from Amazon to a dedicated machine with a 1080Ti card, which should speed up training somewhat. I’ll probably also take a look to see if the additions in other colourization networks provide any benefit, for example adding other nets alongside the U-Net to provide local and global hints for colour. So stay tuned for part 2!
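One possible shape for that - an assumption on my part rather than finished code - is to keep the stills in on-disk bcolz carrays and feed Keras from a generator, one batch at a time:

    # A sketch only: stream training batches from on-disk bcolz carrays so the
    # full 650,000-image dataset never has to sit in memory. The .bclz paths
    # and array shapes are assumptions for illustration.
    import bcolz

    X = bcolz.open("stills_L.bclz", mode="r")      # (N, H, W, 1) greyscale input
    y = bcolz.open("stills_lab.bclz", mode="r")    # (N, H, W, 3) Lab targets

    def batches(X, y, batch_size=32):
        while True:
            for start in range(0, len(X), batch_size):
                # slicing a carray pulls just that chunk into memory as numpy
                yield X[start:start + batch_size], y[start:start + batch_size]

    model.fit_generator(batches(X, y),
                        steps_per_epoch=len(X) // 32,
                        epochs=10)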


  1. spoilers: yes, it would. [return]

A Citizen With 4 Fewer Teeth

I no longer have a green card! Or anything that identifies me as a valid citizen other than a large, watermarked piece of paper. Which is likely worse…but better, once I get the new passport sorted out. Anyhow, the citizenship ceremony was fairly straightforward, except that our current President still hasn’t got around to recording a welcome message for new citizens, and that he was completely absent from all the printed material. Not that most of us minded when we realized.

After a weekend of boardgames, Doctor Who, buying 8 cases of Diet Coke, and a longer-than-expected-dinner-because-reasons, it was Monday and time to celebrate being an American in style: getting my wisdom teeth removed. I have always been hesitant to have them out, due to one of them being apparently close to one of my jaw nerves. But the dentist here seemed very keen on yanking them out, and well, I was getting tired of the infections they kept bringing. I was informed that I have the smallest mouth she’s ever seen…but I was also one of the nicest patients she’s had. So swings and roundabouts there (everybody seems to agree on the small mouth thing…BUT NOBODY EVER SAID THAT TO ME BEFORE! But fine. Sure.).

Thankfully, it was fairly quick and easy to yank them out, but as everybody thought it would be a bad idea for me to be alone afterwards, Tammy drove me all the way back to Kentucky for the week. I spent the 8-hour car ride apologizing every five minutes, so I’m grateful that she didn’t throw me out in West Virginia.

Aside from running a fever on Wednesday and making the well-intentioned mistake of eating pizza yesterday, it hasn’t been too bad. Some pain, but not unbearable, not too much swelling, and the nerve ended up not being a problem. Hurrah! Though I am looking forward to being able to drink Diet Coke and tea again next week.

In the meantime, I am hopped up on ibuprofen, antibiotics, and percocet, whilst doing work and dying over and over on Zelda1. Tomorrow, mostly recovered, I fly home to Durham. Home for now, anyhow…


  1. Buried lede - I have a Switch! Not with the colour of joycons that I wanted, but after looking for…two months for the thing, I decided to get what was available. I am so bad at Zelda, but slightly better at Mario Kart 8. [return]

Brian Cant

RIP.

As a point of comparison to American readers, I’d say that Cant was something akin to Mr. Rogers in terms of importance to younger viewers. But whereas, to this foreigner, Rogers comes across as the classic American staid gentleman next door, Cant and the Playaway/Play School set gave the impression that they spent their evenings debating Marxist tracts in their omnisexual polyamorous commune.1 They were firmly pitched in the ‘now’…even if that ‘now’ seems a lost time and place for us, particularly in the light of recent events.

The other thing about Cant is that he was always there. We grew up with him on Playaway and Play School, but even after going to school, you’d see him somewhere when you were off sick and watching the schools programmes. Or when he was doing a rotation on Jackanory. Or during the Summer when repeats of Camberwick Green or Trumpton would fill in time on But First This… And then even later at university in the late 90s, where he did The Organ Gang shorts for This Morning With Richard Not Judy. A reassuring twinkling smile when you’re drinking a Lemsip…with a hangover on a Sunday morning at the age of 28.2

You can’t claim that he was one of the Pythons, or up there with Spike Milligan, but he had that sparkle of safe anarchy that us British and children in general love:

You could never describe Cant as cool. And yet, a generation of us watched him whilst we were small and knew that’s what we wanted to be when we grew up. Even if we didn’t realize it until much later.


  1. Obviously, no harm to Fred Rogers, who was a legend on his own terms. [return]
  2. Aaahhhhh [return]

One More Day

My family got an extra night in the US after their plane failed to reboot properly. As of right now, they’re delayed another two hours on their second attempt. At this rate, they’ll be here for my citizenship ceremony on Friday. But hopefully they’ll get underway this evening. After all, the cats back home are getting hungry without their supply of treats…

The Absolute Boy

22:00 BST. When the Exit Poll fell.

Although I didn’t make it explicit last week for fear of jinxing it, the YouGov/Survation polls of last week didn’t just make me hope. Given the completely inept way the Tories ran their campaign, the other polls just seemed wrong - surely we wouldn’t give somebody a 100+ seat majority when they spent six weeks seemingly hiding from the press?

We did not.

And we laughed and laughed and laughed. The Tories achieved an amazing Pyrrhic victory, managing to lose a 25-point lead to a man who just two months ago looked like he was taking Labour to the point of destruction.

But they were wrong.

Along with Macron’s En Marche handing out a trouncing in the French elections, things might be looking up?1 Just maybe?

In less globally-important news, my citizenship interview went well, and I will become a US citizen on June 23rd. I will celebrate by having my wisdom teeth taken out on the following Monday. I know how to have a good time, y’know.

And, a good time was had this weekend - a full house with many friends, Tammy and I spending Sunday making cakes, ice creams, other pastry items, and then me abandoning her to cook all the chicken. But: so many people that even the extended table wasn’t enough for everybody. Pools and slip’n’slides as well!


  1. I can, at request, go into lengthy detail why the “Bernie-would-have-won” brigade shouldn’t take this as vindication, but I’ll just leave you here with my précis: Jeremy Corbyn won the Labour leadership twice, the second time with more PLP shenanigans than the DNC committed even in your wildest fever dreams. Come back when you don’t lose by 3 million votes. [return]

And I Just Can't Help Believing, Though Believing Sees Me Cursed

I have started to get invested in the polls (at least YouGov/Survation). This will inevitably lead to a crushing sensation around 17:00 EDT on Thursday when the party that has run the worst GE campaign in living memory gets elected in a landslide.

It’s not the despair, Laura. I can stand the despair. It’s the hope!

To cheer us all up, here’s Gyles Brandreth on his short career as an anti-porn crusader in the 1970s:

(if you’re American, then Gyles is essentially so English that if he didn’t exist, we’d be forced to create him in a lab.)

And that’s it this week. Citizenship Interview and Family Arrival tomorrow…

The Car That Didn't Go

I don’t have too many vivid memories of childhood1, but I do remember one day in primary school. We had been given a project to make a small wheeled vehicle, and we were out testing them on the netball court. It was a hot, sunny day.

My car was a mash of Construx and ice-cream cartons2, while Scott’s was a constructed kit affair with a proper motor and gearing. Mine had a huge power block with D batteries and not a single gear in sight. It had worked fine in testing. But testing was my table back home, not the tarmac of the court.

It started, it spluttered, it threw off the belt I had liberated from one of the many video recorders my Dad had let me disassemble. It sat on the court while Scott’s fancy car zipped away in the distance.

The lesson here is not, surprisingly, to learn about gearing, or that the rich kid will almost always win, but that if you’re wearing a jumper on a hot summer’s day and you’re still freezing, you might be ill. And so, defeated, I lay back on the grass and shivered until my Mum came to pick me up.

I did build a computer this weekend. It works. My first build since…2006 or so (and even then, that was just replacing a blown motherboard. This is likely my first totally new tower since before UNC…and let’s not count up those years!). I can’t really do anything fancy with it until I buy the graphics card in a month or two, but it’s coming along.

Success, then…but I am still under a blanket, shivering, having difficulty standing up or sitting down without pain, and oh, yes, almost managing to give myself third-degree burns whilst attempting to carry a Lemsip. Perhaps I shouldn’t be left alone. Maybe I’ve become allergic to Durham! Maybe I’m just sick.

Still, a week tomorrow, my family arrives and I have my citizenship interview. So probably need to get better.


  1. Ask me about Covent Garden, and I’ll do my party piece about how I was abandoned in the middle of London at three years old and left to fend for myself amongst a carny of street performers, armed only with an inflatable hammer but also afflicted with an early adherence to pacifism. My parents may chime in with ‘we were getting ice-cream! You said you didn’t want any! What child doesn’t want ice-cream?’, but I don’t think that alleviates them of guilt, do you, dear reader? [return]
  2. Look, eventually I started liking ice-cream, okay? NOT THAT I DON’T RELIVE THE TRAUMA WITH EVERY SPOON. [return]

Cincinnati Again!

Back in Cincinnati again for the week. And having come here quite a few times in the past year, some things are leaping out at me:

  • I can now reliably spell it, which means auto-correct on my iPhone is having a happier time.
  • It’s a fun little city - bigger than Durham, obviously, but not insanely huge like, say, Chicago or New York.
  • I still haven’t gone on the fancy new trams.
  • You can go outside for more than five minutes and not drown in sweat!
  • OH/KY Mexican restaurants have a surprisingly good showing in tortillas.
  • Jungle Jim’s provides a comprehensive selection of real Cadbury products and even…imports from Tesco.
  • Apparently in Newport, they have the largest bourbon selection in…anywhere?

Anyway, back home to Durham on Saturday…and then…I attempt to build a computer again. Oh dear.