28 January 2015

Patent clauses

Interesting discussion of the license Facebook is now using for their open-sourced code:

For those who aren't aware, it says

1. If Facebook sues you and you counterclaim over patents (whether about software or not), you lose your rights under all of these patent grants.

So essentially you can't defend yourself.

This is different from the typical Apache-style patent grant, which instead says "if you sue me over patents in Apache-licensed software X, you lose rights to software X" (i.e., the grant termination is limited to that software and to the thing you sued over).

2. It terminates if you challenge the validity of any Facebook patent in any way. So no shitty software patent busting!

I think React is a great tool, but I didn't realize this license change happened on October 8th, 2014. Prior to that it was Apache 2.0 licensed (which is my license of choice). After that, React is licensed with BSD plus their special patent document (described above). Bummer.

26 January 2015

Here's an example of how people are trying to improve C++ templating. I loved C++ and wrote code with crazy shit like compile-time polymorphism. At the time I thought it was worth the complexity (hah!). I feel bad for the folks maintaining my old code now. Why would you want to enable such behavior in Go?

25 January 2015

Seems that some folks are having problems using Open ID to post comments here. I've disabled that mode until I can figure it out. In the meantime, please use the other comment form. It'd be nice to hear from you. Sorry for the trouble!

24 January 2015

Sad but true: "Hacker News" isn't news for hackers anymore. The community I enjoyed has moved to Lobsters.

18 January 2015

Experimentally verified: "Why client-side templating is wrong"

This week Peter-Paul Koch ("PPK" of QuirksMode fame) wrote about his issues with AngularJS. Discussing Angular isn't the goal of my post here. It was one aside PPK made that caught me off guard (emphasis mine):

Although templating is the correct solution, doing it in the browser is fundamentally wrong. The cost of application maintenance should not be offloaded onto all their users’s browsers (we’re talking millions of hits per month here) — especially not the mobile ones. This job belongs on the server.

Wow! Turns out I'm not the only one who was surprised by that statement. So he wrote a follow-up that explains "why client-side templating is wrong"; it provides more nuance than the original post:

Populating an HTML page with default data is a server-side job because there is no reason to do it on the client, and every reason for not doing it on the client. It is a needless performance hit that can easily be avoided by a server-side templating system such as JSP or ASP.

To me this could mean one of two things:

  1. Do not use client-side templating at all. Render all data server-side as HTML for the initial page view.
  2. Do not send an AJAX request back to the server after loading the shell of your app (a style that plagues GWT apps like AdWords and Twitter's old UI). Instead, you should provide the initial view data server-side in the first page HTML (i.e., as a JSON blob) and render it with JavaScript immediately after load.

Style #2 is what I see becoming more popular. For example, check out the source of Google Analytics. The first page load is a small HTML shell, JS and CSS resources, and a blob of JSON. Some may say that this style prevents you from getting indexed, but that's old information. Web crawlers now support running JavaScript, meaning you don't need to use 2009-era hacks like #! (hashbang) URLs anymore.

PPK resolves the ambiguity with this:

Then, when both framework and application are fully initialised, the user starts interacting with the site. By this time all initial costs have been incurred, so of course you use JavaScript here to show new data and change the interface. It would be wasteful not to, after all the work the browser has gone through.

So he definitely means option #1 from above: Never use JavaScript for first page load rendering. Now another detail from his post makes more sense (emphasis mine):

I think it is wasteful to have a framework parse the DOM and figure out which bits of default data to put where while it is also initialising itself and its data bindings. Essentially, the user can do nothing until the browser has gone through all these steps, and performance (and, even more importantly, perceived performance) suffers. The mobile processor may be idle 99% of the time, but populating a new web page with default data takes place exactly in the 1% of the time it is not idle. Better make the process as fast and painless as possible by not doing it in the browser at all.

I'd guess PPK would be okay with how ReactJS can render a template client- or server-side using the same code. With React the first page load can be dumb HTML that's later decorated by JavaScript to come alive for user interaction. So I think PPK's concern isn't about JavaScript templating. The debate is whether JavaScript should be required to run before a page is first rendered (a requirement of style #2 above).

Instead of more words about philosophy and architecture, I decided to understand this with numbers by putting it to the test on real hardware. What follows is the experiment I ran.

Experiment HTML pages

I wrote a small server in Go that generates test pages that I figure are a fair approximation of "templating complexity". The silly example I used is straight out of the HTML5 spec for rendering a table of cats that are up for adoption. The server lets you vary the number of cats in the table from 1 to 10,000+. Here's what the output of the server looks like:

Name     Colour                                    Sex                 Legs
Maggie   Brown Mackeral Torbie                     Unknown             3
Lucky    Seal Point with White (Mitted Pattern)    Male                4
Minka    Silver Mackerel Tabby                     Female              4
Millie   Solid Chocolate                           Unknown             2
Kitty    Blue Classic Tabby                        Female (neutered)   4

The server can generate this table in two different ways.

The first way the server renders is with a classic server-side template. Go templates are pretty fast and the results are served compressed and chunked, so I think this is a reasonable proxy for "the best you can do":

<!DOCTYPE html>
<meta charset="utf-8">
<title>Server render</title>
<table>
  <tr><th>Name</th><th>Colour</th><th>Sex</th><th>Legs</th></tr>
  {{range .}}<tr><td>{{.Name}}</td><td>{{.Color}}</td><td>{{.Sex}}</td><td>{{.Legs}}</td></tr>{{end}}
</table>

The second way the server renders is with the new HTML5 template tag used by Polymer and other Web Components libraries. The server generates a blob of JSON and does all rendering from that data client-side. The only server-side templating required is injecting the JSON serialized data into the HTML shell:

<!DOCTYPE html>
<meta charset="utf-8">
<title>Template tag render</title>
<table id="cats">
  <tr><th>Name</th><th>Colour</th><th>Sex</th><th>Legs</th></tr>
</table>
<template id="row">
  <tr><td></td><td></td><td></td><td></td></tr>
</template>
<script>
var data = {{.}};  // The JSON data goes here
var table = document.querySelector('#cats');
var template = document.querySelector('#row');
for (var i = 0; i < data.length; i += 1) {
  var cat = data[i];
  var clone = template.content.cloneNode(true);
  var cells = clone.querySelectorAll('td');
  cells[0].textContent = cat.name;
  cells[1].textContent = cat.color;
  cells[2].textContent = cat.sex;
  cells[3].textContent = cat.legs;
  table.appendChild(clone);
}
</script>

Notably, neither of these pages has any external resources. I wanted to control for that variable and understand the best you could do theoretically with either approach.

Browser setup

I tested the two test pages in two different browsers.

The first browser is Chrome Desktop version 39.0.2171.99 on Mac OS X version 10.10.1; the machine is somewhat old, a mid-2012 MacBook Air (2GHz Core i7, 8GB RAM, Intel graphics). The second browser is Chrome Mobile version 39.0.2171.93 on Android 4.4.4; the device is top of the line, a OnePlus One (Snapdragon 801, 2.5GHz quad-core, 3GB RAM, Adreno 330 at 578MHz). Granted, this is a high-end mobile device, but it's also one of the cheapest out there — I wouldn't bet against Moore's law (especially with what's happening to Samsung).

On desktop, the page loads were done directly to localhost:8080, eliminating any concerns of network overhead. On mobile, I used port forwarding from the Chrome DevTools to access the same localhost:8080 URLs.

Content size

Here's a chart of the size of the content as a function of the number of cats on the adoption page (note that this chart is log-scale on both the X and Y axes).

What I see in this graph:

  • The page that uses JSON is smaller than the HTML page
  • But the compressed size only varies by 10-20% at most

Conclusion: Response sizes don't matter if you're using compression.

Server response time

Here's a chart of the server response time as a function of the number of cats on the adoption page (again this is a log-log chart on both axes). This is the time it takes for the Go server to render the template without actually loading it in a browser. I measured this time with curl -w '%{time_total}\n' -H 'Accept-Encoding: gzip' -s -o /dev/null URL.

What I see in this graph:

  • The pages are about the same performance until you get to 250 cats
  • Above 250 cats, the JSON page becomes increasingly faster (5x for 10,000 cats)

Conclusion: The page that uses JSON is faster at higher numbers of cats likely because there's less data to copy to the client. But this latency is small enough and similar enough (JSON vs. HTML) that it can be ignored when comparing overall render time.

First & last paint time

This is the most important thing to measure. Essentially, PPK's argument comes down to the critical rendering path. That is: How long from when the user starts trying to load your webpage in their browser to when they actually see something useful appear on their screen?

Measuring the time from "request to glass" is pretty well documented these days. Ilya Grigorik gave a great talk at Velocity 2013 about optimizing the critical rendering path. Since then the same content has been turned into helpful documentation. Chrome even provides a window.chrome.loadTimes object that will give you all of the timing information (see this post for details).

I added instrumentation to the templates above to console.log two numbers:

  • Time to first paint: How long between the request start and drawn pixels on the screen. Importantly, the page may still be loading (via a chunked response) when this happens.
  • Time to last paint: How long between the request start and the page fully finishing its load and render. This is the time after all resources have loaded, all JavaScript has run, and all client-side templates have inserted data into the DOM.

I ran each timing test 5 times to account for environment variations (like GC pauses and random system interrupts).

Results: Desktop

Here's a chart of time to first paint on Chrome Desktop (log-scale X axis).

Here's a chart of time to last paint on Chrome Desktop (also log-scale X axis).

What I see in these graphs:

  • The first and last paint times for the JSON/JavaScript approach are the same (as expected)
  • The first and last paint times for the server-side approach are different because the browser can render HTML before the whole chunked response has finished loading
  • Up to 1000 cats, there is practically no latency difference on Desktop between client-side rendering with JavaScript and server-side rendering with dumb HTML
  • Above 1000 cats, the server-side rendered approach will do first paint well before the client-side approach (3x faster in the case of 10,000 cats)
  • However, the client-side rendered approach will do last paint well before the server-side approach (2x faster in the case of 10,000 cats)

Conclusion: It depends on what you're doing. If you care about first paint time, server-side rendering wins. If your app needs all of the data on the page before it can do anything, client-side rendering wins.

Results: Mobile

I repeated the same painting tests for the mobile browser.

Here's a chart of time to first paint on Chrome Mobile (log-scale X axis, and note that the Y axis is 0 to 10 seconds instead of 0 to 2 seconds used in the desktop charts above).

Here's a chart of time to last paint on Chrome Mobile (also log-scale X axis and 0 to 10 second Y axis).

What I see in these graphs:

  • Everything is about 5x slower than desktop
  • There's a lot more variance in latency on mobile
  • Performance up to 1000 cats is roughly the same between server-side and client-side rendering (with up to a 30% difference in one case; 15% or less otherwise)
  • Above 1000 cats, the server-side rendered approach has a lower time to first paint (up to 5x faster) just like desktop
  • The client-side rendered approach will do last paint well before the server-side approach (up to 2x faster) just like desktop

Conclusion: Almost the same comparative differences between client- and server-side rendering as on desktop.


Is PPK correct in saying that "client-side templating is wrong"? Yes and no. My test pages use the number of cats in the adoption table as a proxy for the complexity of the page. Below 1,000 cats worth of complexity, the client- and server-side rendering approaches have essentially the same time to first paint on both desktop and mobile. Above 1,000 cats worth of complexity, server-side rendering will do first paint faster than client-side rendering. But the client-side rendering approach will always win for last paint time above 1,000 cats.

That leads to two important questions:

1. How many cats worth of complexity are you rendering in your pages?

For most of the things I've built, I usually don't render a huge amount of data on a page (besides images). 1,000 cats worth of complexity in the data model (which corresponds to ~75KB of JSON data) seems like quite a bit of room to work with. For example, the Google Analytics page I linked to above only has ~30KB of JSON data embedded in it for initial load (in my case).

After the first load, modern tools make it trivial to dynamically load content beneath the fold. You can even do this in a way that doesn't render elements that aren't visible, significantly improving performance. And I'd say optimizing the size and shape of the first render data payload is straightforward.

Practically speaking, this all means that if your first load is less than 1,000 cats of complexity, client-side rendering is just as fast as server-side rendering on desktop and mobile. To me, the benefits of client-side rendering (e.g., UX, development speed) far outweigh the downsides. It's good to know I'm not making this tradeoff blindly anymore.

2. Do you care about first paint time or last paint time?

With Google's guide on performance, Facebook's BigPipe system, and companies like CloudFlare and Fastly making a splash, it's clear that some people really care a lot about first paint performance. Should you?

I'd say it really depends on what you're doing. If your business model relies on showing advertisements to end-users as soon as possible, time to first paint may make all the difference between a successful business and a failed one. If your website is more like an app (like MailChimp or Basecamp), the most important thing is probably time to last paint: how soon the app will become interactive after first load.

The take-away here is to choose the right architecture for your problem. Most of what I build these days is more app-like. For me, time to first paint doesn't matter as much as time to last paint. So again, I'll stick with the benefits of client-side rendering and last-paint performance over server-side rendering and first-paint performance.


My conclusion from all this: I hope never to render anything server-side ever again. I feel more comfortable in making that choice than ever thanks to all this data. I see rare occasions when server-side rendering could make sense for performance, but I don't expect to encounter many of those situations in the future.

(PS: You can find all of the raw data in my spreadsheet here and all of the test code here)

(PPS: If you're particularly inspired by this post to adopt a cat, check out the ASPCA)

16 January 2015

M.C. Escher and Monument Valley

I love the visual style of M.C. Escher.

It's probably old news but I finally learned about the game Monument Valley this week. It's beautiful and clearly an homage to Escher's art. The puzzle game requires traversing a map of impossible constructions and geometry.

Here's a good example of what you have to twist your mind around (you have to go up the stairs, then across the bridge, then down the ladder, then rotate the platform, then go back up in an impossible direction):

The original game is absolutely gorgeous and worth it just for the visuals and audio. The new levels ($2 extra) are more difficult than the first set and even more reminiscent of Escher's wackier art. More like this please!

13 January 2015

"Territory Annexed" on The Awl is a great article about how media and silos are evolving on the internet.


Insightful post by legendary (Python) programmer Ian Bicking: "Being A Manager Is Lonely".

Even if you don't manage people, this is helpful in that it illustrates what your boss is probably worried about and what you have to look forward to if management is thrust upon you.

11 January 2015

$ make clean

Agreeing to the Xcode/iOS license requires admin privileges, please re-run as root via sudo.


09 January 2015

Attempts at adding Go's select to Python

I found a great explanation of how Go channels are implemented under the covers (January 2014). It complements a similar implementation from Plan 9 and some related discussion on go-nuts.

I also found an implementation of channels for Python called goless that relies on Stackless or gevent. The project's big open issue is that it's missing support for asyncio. That should be possible given asyncio has built-in support for synchronization primitives. But even with that, I'm not enthusiastic about how goless implements the select statement from Go:

chan = goless.chan()
cases = [goless.rcase(chan), goless.scase(chan, 1), goless.dcase()]
chosen, value = goless.select(cases)
if chosen is cases[0]:
    print('Received %s' % value)
elif chosen is cases[1]:
    assert value is None
else:
    assert chosen is cases[2]

An earlier attempt at select for Stackless has the same issue. Select is just really ugly. Send and receive on a channel aren't too pretty either. I wonder if there's a better way?
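For comparison, the construct goless is emulating reads much more cleanly in Go itself, where select is a statement. This sketch wraps it in a helper function so each branch's outcome is visible:

```go
package main

import "fmt"

// trySelect mirrors the goless example above: one receive case, one
// send case, and a default, all on the same channel.
func trySelect(ch chan int) string {
	select {
	case v := <-ch:
		return fmt.Sprintf("received %d", v)
	case ch <- 1:
		return "sent"
	default:
		return "default"
	}
}

func main() {
	ch := make(chan int, 1)
	fmt.Println(trySelect(ch)) // buffer empty: only the send case is ready
	fmt.Println(trySelect(ch)) // buffer full: only the receive case is ready
}
```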

08 January 2015

Interesting summary on LWN of Guido's type hinting proposal for Python. The comments are the best part.

07 January 2015

Update from Jason Scott about the Internet Archive's cool DOS games treasure trove.


Apache Samza, LinkedIn’s Framework for Stream Processing. I still don't know what tool I'd use for this on an open source stack.


Useful ECMAScript 6 compatibility table. Looks like 6to5 and Traceur are almost ready for prime time.

06 January 2015

What would Von Neumann say about data binding?

Interesting quote and book summary on High Scalability about building the ENIAC:

Von Neumann had one piece of advice for us: not to originate anything... One of the reasons our group was successful, and got a big jump on others, was that we set up certain limited objectives, namely that we would not produce any new elementary components... We would try and use the ones which were available for standard communications purposes. We chose vacuum tubes which were in mass production, and very common types, so that we could hope to get reliable components, and not have to go into component research.

This makes me wonder about the evolution of tools we're seeing around data binding in building websites these days: React, Angular, Polymer. The outcomes, APIs, and benefits of these projects are very similar. But somehow using these tools feels like you're doing a type of UX development research — they're unproven.

The conservative approach, as Von Neumann suggests, would be to avoid these tools entirely. However, I believe these next generation UX tools are actually a secret weapon akin to Paul Graham's description of using LISP instead of a median programming language:

We wrote our software in a weird AI language, with a bizarre syntax full of parentheses. For years it had annoyed me to hear Lisp described that way. But now it worked to our advantage. In business, there is nothing more valuable than a technical advantage your competitors don't understand. In business, as in war, surprise is worth as much as force.

Obviously, the median language has enormous momentum. I'm not proposing that you can fight this powerful force. What I'm proposing is exactly the opposite: that, like a practitioner of Aikido, you can use it against your opponents.

If you work for a big company, this may not be easy. You will have a hard time convincing the pointy-haired boss to let you build things in Lisp, when he has just read in the paper that some other language is poised, like Ada was twenty years ago, to take over the world. But if you work for a startup that doesn't have pointy-haired bosses yet, you can, like we did, turn the Blub paradox to your advantage: you can use technology that your competitors, glued immovably to the median language, will never be able to match.

And so we're trying to build some things using Polymer. I hate doing anything trendy but I must admit that it feels extremely productive to use. Given that the <template> tag is built into many browsers now, I think this is more stable than it appears. I'll let you know how it goes.


Effective Python's first public review (I figure they read the Rough Cut).


Cool explanation of some Lua/LISP tools for an RPG.


Want! The XL741 Discrete Op-Amp Kit: "Build a working transistor-scale replica of the classic and ubiquitous analog workhorse."


Today so far.

05 January 2015

Two different approaches to HTTP request contexts in Go servers. These rely on passing another parameter to every function involved in handling a request. That kind of repetition makes me hungry for a threadlocal that works with goroutines. Such an idea is taboo in Go, but apparently it's possible.
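Concretely, the pattern those posts describe looks something like this sketch (all names here are mine, not from the linked code). Note how the context has to ride along on every call in the request path:

```go
package main

import "fmt"

// requestContext holds per-request state; every function in the
// request path takes it as an explicit parameter.
type requestContext struct {
	RequestID string
}

func handleRequest(ctx *requestContext) string {
	return loadData(ctx) // and so on, down the entire call chain
}

func loadData(ctx *requestContext) string {
	return fmt.Sprintf("data for request %s", ctx.RequestID)
}

func main() {
	fmt.Println(handleRequest(&requestContext{RequestID: "abc123"}))
}
```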


Though it's only been a week, I've enjoyed the blogs of my Twitter friends much more than their Tweets. Go figure~

04 January 2015

Here's a first taste of what's coming in Effective Python. The full text of Item 17: Be Defensive When Iterating Over Arguments. If you haven't hit this gotcha already, it's only a matter of time. I describe the symptoms and a reasonable solution.

29 December 2014

I built my Twitter friends to RSS feeds tool using App Engine's new Managed VMs feature and Go. It's scaling great!


Generic programming in Go using "go generate"

Go 1.4 was released on December 10th and brings us the new go generate command. The motivation is to provide an explicit preprocessing step in the golang toolchain for things like yacc and protocol buffers. But interestingly, the design doc also hints at other use-cases like "macros: generating customized implementations given generalized packages, such as sort.Ints from ints".

That sounds a lot like generics to me! As you've probably heard, generics are a glaring omission from Go and the subject of a lot of debate. When I saw the word "macro" in the design doc I was reminded of "C with Classes", Bjarne Stroustrup's first attempt at shoehorning object-oriented programming into C. Here's what he recalls from the time:

In October of 1979 I had a pre-processor, called Cpre, that added Simula-like classes to C running and in March of 1980 this pre-processor had been refined to the point where it supported one real project and several experiments. My records show the pre-processor in use on 16 systems by then. The first key C++ library, the task system supporting a co-routine style of programming, was crucial to the usefulness of "C with Classes," as the language accepted by the pre-processor was called, in these projects.

Perhaps go generate is the first version of "Go with Generics" that we'll look back at nostalgically while we're all writing Go++. But for now I'd like to take generate for a spin and see how it works.

Why generics?

To make sure we're on the same page, I'd like to answer the question: why do I want to do this? For me the first painful moment using Go was when I tried to join together some strings with a newline character. The problem was that I'm far too reliant on writing Python code like this:

#!/usr/bin/env python

from collections import namedtuple

Person = namedtuple('Person', ['first_name', 'last_name', 'hair_color'])

people = [
    Person('Sideshow', 'Bob', 'red'),
    Person('Homer', 'Simpson', 'n/a'),
    Person('Lisa', 'Simpson', 'blonde'),
    Person('Marge', 'Simpson', 'blue'),
    Person('Mr', 'Burns', 'gray'),
]

joined = '\n'.join(repr(x) for x in people)

print 'My favorite Simpsons Characters:\n%s' % joined

Note how the '\n'.join(repr(x) for x in people) does all the heavy lifting here. It converts the object to a string representation using the repr function. The join method consumes all of those converted inputs and returns the combined string. The same approach works for any type you throw at it. The output is unsurprising:

My favorite Simpsons Characters:
Person(first_name='Sideshow', last_name='Bob', hair_color='red')
Person(first_name='Homer', last_name='Simpson', hair_color='n/a')
Person(first_name='Lisa', last_name='Simpson', hair_color='blonde')
Person(first_name='Marge', last_name='Simpson', hair_color='blue')
Person(first_name='Mr', last_name='Burns', hair_color='gray')

Here's an attempt at accomplishing the same thing generically in Go. The idea here is that I'll implement the method required to make my struct satisfy the fmt.Stringer interface. Then I'll use a type conversion to invoke the generic method With on a []fmt.Stringer slice. This should work because my struct Person satisfies the interface, right?

package main

import (
	"fmt"
	"strings"
)

type Join []fmt.Stringer

func (j Join) With(sep string) string {
	stred := make([]string, 0, len(j))
	for _, s := range j {
		stred = append(stred, s.String())
	}
	return strings.Join(stred, sep)
}

type Person struct {
	FirstName string
	LastName  string
	HairColor string
}

func (s *Person) String() string {
	return fmt.Sprintf("%#v", s)
}

func main() {
	people := []Person{
		Person{"Sideshow", "Bob", "red"},
		Person{"Homer", "Simpson", "n/a"},
		Person{"Lisa", "Simpson", "blonde"},
		Person{"Marge", "Simpson", "blue"},
		Person{"Mr", "Burns", "gray"},
	}
	fmt.Printf("My favorite Simpsons Characters:%s\n", Join(people).With("\n"))
}

Unfortunately, this fails with a cryptic message:

./bad_example.go:40: cannot convert people (type []Person) to type Join

Perhaps the type conversion Join(people) is no good. What if instead I just accept a slice of fmt.Stringer interfaces? My Person struct implements String so it's assignable to a fmt.Stringer. It should work. Here's the revised section of the program:

type Joinable []fmt.Stringer

func Join(in []fmt.Stringer) Joinable {
	out := make(Joinable, 0, len(in))
	for _, x := range in {
		out = append(out, x)
	}
	return out
}

func (j Joinable) With(sep string) string {
	stred := make([]string, 0, len(j))
	for _, s := range j {
		stred = append(stred, s.String())
	}
	return strings.Join(stred, sep)
}

This also fails, this time a bit more clearly:

./bad_example2.go:51: cannot use people (type []Person) as type []fmt.Stringer in argument to Join

The problem here is the difference between a slice of structs and a slice of interfaces. Russ Cox explains all the details here. The gist is that an interface value in memory is a pair of pointers: the first points to the type information for the value (its method table for fmt.Stringer), and the second points to the underlying data (the Person). A []Person slice is contiguous bytes in memory of Person structs. A []fmt.Stringer slice is contiguous bytes in memory of these interface pairs. The representations aren't the same, so you can't convert between them in a typesafe way.
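The typesafe escape hatch is an explicit element-by-element copy, building a fresh interface pair for each struct. A minimal sketch:

```go
package main

import (
	"fmt"
	"strings"
)

type Person struct {
	FirstName string
}

func (p *Person) String() string { return p.FirstName }

// toStringers converts []Person to []fmt.Stringer the only way Go
// allows: one assignment per element, each building a (type, data) pair.
func toStringers(people []Person) []fmt.Stringer {
	out := make([]fmt.Stringer, len(people))
	for i := range people {
		out[i] = &people[i] // String has a pointer receiver, so take the address
	}
	return out
}

func main() {
	ss := toStringers([]Person{{"Homer"}, {"Lisa"}})
	parts := make([]string, len(ss))
	for i, s := range ss {
		parts[i] = s.String()
	}
	fmt.Println(strings.Join(parts, ", "))
}
```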

So we're stuck. The only way out is to use reflection, which will slow everything down. Luckily, in Go 1.4 we now have another built-in option: go generate.

Writing a generate tool

The Go team helpfully provided an example tool for generating Stringer implementations using go generate. The code for the tool is here and it's pretty gnarly. It walks the abstract syntax tree (AST) of the source code and determines the right code to output. It's quite an odd form of generic programming.

Based on this example I tried to implement my own generate tool. My goal was to provide the join functionality I sorely missed from Python. I'd consider it a success if the following program would execute simply by running go generate followed by go run *.go.

package main

//go:generate joiner $GOFILE

import (
	"fmt"
)

// @joiner
type Person struct {
	FirstName string
	LastName  string
	HairColor string
}

func main() {
	people := []Person{
		Person{"Sideshow", "Bob", "red"},
		Person{"Homer", "Simpson", "n/a"},
		Person{"Lisa", "Simpson", "blonde"},
		Person{"Marge", "Simpson", "blue"},
		Person{"Mr", "Burns", "gray"},
	}
	fmt.Printf("My favorite Simpsons Characters:\n%s\n", JoinPerson(people).With("\n"))
}

I also had to do some AST walking (that's the core of the tool). I rely on the comment // @joiner to indicate which types I want to make joinable. Yes, this is a gross overloading of comments. Perhaps something like tags for type declarations would be better if the language supported it (similar to "use asm"). Go's built-in templating libraries made it easy to render the generated functions.

The full code for my tool is available on GitHub. You can install it on your system with go get github.com/bslatkin/joiner. Once you do that, you can run go generate to cause Go to run the tool and output a corresponding main_joiner.go file that looks like this:

// generated by joiner -- DO NOT EDIT
package main

import (
	"fmt"
	"strings"
)

func (t Person) String() string {
	return fmt.Sprintf("%#v", t)
}

type JoinPerson []Person

func (j JoinPerson) With(sep string) string {
	all := make([]string, 0, len(j))
	for _, s := range j {
		all = append(all, s.String())
	}
	return strings.Join(all, sep)
}

Remarkably, this works for my original example above with no modifications. Here's the output from running go run *.go:

My favorite Simpsons Characters:
main.Person{FirstName:"Sideshow", LastName:"Bob", HairColor:"red"}
main.Person{FirstName:"Homer", LastName:"Simpson", HairColor:"n/a"}
main.Person{FirstName:"Lisa", LastName:"Simpson", HairColor:"blonde"}
main.Person{FirstName:"Marge", LastName:"Simpson", HairColor:"blue"}
main.Person{FirstName:"Mr", LastName:"Burns", HairColor:"gray"}


Does go generate make generics for Go easier? The answer is yes. It's now possible to write generic behavior in a way that easily integrates with the standard golang toolchain. I expect existing Go generics tools like gen and genny to move over to this standard approach.

However, that helps most in consuming generic code libraries and using them in your programs. Writing new generic code is still an exceptionally laborious process. Having to walk the AST just to write a generic function is insane. But you can imagine a standard generate tool that helps you write other generate tools. That's the piece we're missing to make generic programming in Go a reality. Now with go generate in the wild, I look forward to renewed interest in projects like gotgo, gonerics, and gotemplate to make this easy!

28 December 2014

Spew is a super useful Go library that "implements a deep pretty printer for Go data structures to aid in debugging".


Tweeps 2 OPML — Find your Twitter contacts' personal RSS feeds

I wrote a utility to get the RSS feeds of all of my Twitter friends. Use it for your contacts here. Let me know if it breaks!

The background is that I realized I've been following a lot of people on Twitter but not necessarily following their blogs. That's sad, especially considering that I'm still using RSS a lot these days. This tool solves the problem in just a few clicks. The best part is that most feed readers are fine with importing duplicate feeds, so you can periodically run this thing and reimport the list of feeds to stay up-to-date.

The one thing to watch out for is people who link to non-personal feeds like the wired.com homepage. Once you import all the feeds into your feed reader it's worth looking through the "Twitter friends" directory and removing those feeds that look noisy or suspicious.

All the code is on GitHub. Patches welcome. It'd probably be useful to have the same tool for Google+, Facebook, LinkedIn, etc.


How to import an OPML file of RSS feeds into NewsBlur

I couldn't find this in an FAQ anywhere, so here it is.

1. Go home by clicking on the home icon (top left):

2. Click on your account settings gear (top right) to see this menu:

3. Click on "Import or upload sites" and see the button to pick an OPML file:

4. Upload the file and wait a while. You'll probably need to reload the page. Importing hundreds of new feeds can take 5+ minutes sometimes.

Similar instructions for Feedly, The Old Reader, and Digg Reader.


MIT Science Reporter

Watched another of the MIT Science Reporter videos from the 1960s. Bizarre to see how people talked about technology 50 years ago. Here's one about the Apollo Computer from 1965. That same year Margaret Hamilton "became the director and supervisor of software programming".

27 December 2014

Current status: Trying to generate an OPML file.
(yes, I realize it's almost 2015)

25 December 2014

Old one (1991) I didn't know: "Fowler–Noll–Vo hash function"

18 December 2014

Our approach to manufacturing is as quaint as punchcards

Recently, I had the privilege to hang out with some septuagenarian programmers. I was showing off the bicycle shifter I've been working on. I complained to the veterans that I was shocked by the amount of time required to bring a physical design to implementation. It took two weeks to come up with a CAD model; a month and 30 failed attempts to make a usable 3D print; 4 months to manufacture a part out of aluminum.

"That's what programming was like back in the day" was their reply. Really? "You'd turn in your punch cards and hope to get the output a week later — sooner if you were lucky". Can you imagine writing a program like that? I had never considered the analogy to physical design.

It's hard to imagine how anyone prototyped physical objects before 3D printing. Debugging a program with one execution attempt per week is similarly inconceivable. We're spoiled these days. We grew up reliant on read-eval-print loops (REPLs), interactive debuggers, fast compilers, and instant gratification. Though we've progressed beyond punchcards, many people think our tools still aren't fast enough. Our generation is obsessed with speed and convenience (perhaps for good reason).

The question is: What will bring physical manufacturing up to parity with software? Earlier this year I would have said it's 3D printing. But 3D printing is slow. Even though laser sintering can produce precision parts like rocket engines, it doesn't scale. SpaceX isn't printing 1,000,000 engines, they're printing 100.

To build cars, cell phones, and soda cans you need to produce high volumes quickly. Waiting for a 3D printer to finish a job isn't cost effective for most terrestrial products. Instead, you need to design for manufacturing with mills in factories where the machine time is measured in seconds, not hours. But today, designing for factories is way too difficult. Our approach to manufacturing is as quaint as running assembly code on punchcards once a week.

Though there are high-level tools to turn CAD models into milling toolpaths, that's only a small portion of the problem. What we need is a way to click a button and launch a manufacturing process: a generalized, automated workflow from CAD to produced parts. Maybe affordable milling tools will help us get there. Maybe cheap, inflatable robots will enable scale.

As much as I loved The Diamond Age, I don't think 3D printers will make it possible. But I think it's a goal worth pursuing. It would be wonderful to tell our grandchildren the same line: "That's what manufacturing was like back in the day". Who knows what they'll be creating then.


You can read Effective Python today

The rough cut of Effective Python is now available on Safari Books Online. This is an early preview of the full content of the book before editing has been completed. I've also published the table of contents on the website and all the example source code in the GitHub project.

17 December 2014

Fun time tonight at the Homebrew #indieweb club at Mozilla SF with Tantek, Ryan, Katie, Pius, Nick, Ari, Jon, Kyle, and more! Thanks!

16 December 2014

It feels surreal to see proofs of the final layout of Effective Python.


Compare the shunting-yard algorithm implemented in Rust vs. Samizdat. Samizdat has built-in syntax for defining PEG parsers.

15 December 2014

danfuzz released the source to Samizdat today! "Samizdat is a high-level programming language somewhere down the family lineage from all of ALGOL, Lisp, and Smalltalk." I'm excited to try it out.

10 December 2014

Excited to be giving a talk at PyCon 2015: "How to Be More Effective with Functions". Plus, Effective Python will be published by then! Hope to see you in Montréal.

09 December 2014

brew install npm
npm install bower
bower install

08 December 2014

I signed up for LinkedIn on Saturday to test running ads (I'm curious whether they work better than AdWords or Twitter). Even though I didn't give it access to my address book, I think it emailed everyone I've ever known. Embarrassing! But I've heard that's par for the course with them. Anyway, if you got an email with my face in it, I'm sorry.


tracemalloc alone is a good enough reason to prefer Python 3.
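If you haven't tried it, the API is small: start tracing, do some work, take a snapshot, and group allocation statistics by source line. A minimal sketch:

```python
import tracemalloc

# Begin recording every allocation's traceback.
tracemalloc.start()

# Allocate something sizable so there's something to see.
data = [str(i) * 10 for i in range(100_000)]

snapshot = tracemalloc.take_snapshot()
top_stats = snapshot.statistics('lineno')  # group by source line

# The biggest allocators show up first.
for stat in top_stats[:3]:
    print(stat)
```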

06 December 2014

Another I missed: "7 Common Mistakes In Go And When To Avoid Them". It's a good list.


Effective Python is now available for preorder

Exciting! You can get it on Amazon by using this link.

Kindle, Safari, and eBook editions will be available once we're closer to the publishing date.

04 December 2014

Cool overview that I missed: "Go 1.4+ Garbage Collection (GC) Plan and Roadmap"

01 December 2014

Docker vs. Rocket vs. Vagrant — Still anyone's game

In case you missed it, today CoreOS introduced its own container format called Rocket (to compete with Docker). It's hard to decide who actually knows what they're talking about here. The CoreOS folks have proven they have good judgement when it comes to reimplementing existing infrastructure. Docker's CEO posted a rebuttal that's encouraging, but it rings hollow to me.

Separately, Vagrant is barely mentioned these days. The lines between all three options (Docker, Rocket, Vagrant) blur for most people. For example, check out this Stack Overflow thread from about a year ago where the creator of Docker and the creator of Vagrant go head-to-head on the question, "Should I use Vagrant or Docker?" They're both a bit confused, but Docker has captured mindshare since then.

My take is that things are moving so fast that nobody knows where it'll end up. Docker was first demoed at PyCon in March 2013 — so recently! Yet when you see both Amazon and Google announcing Docker container services within weeks of each other, it's easy to assume that Docker has won. The beauty of open source and high-level APIs is that it's still anyone's game. Confident broadsides like Rocket cast doubt on our assumptions. I applaud CoreOS for keeping everyone alert.

Meanwhile, Solomon Hykes (Docker's creator and CTO) still knows what's up. Here's his Tweet stack on the matter. His words ring true to me. Why isn't Docker responding on their blog with strong technical leadership like that? After Twitter and others destroyed their ecosystems, we're all a bit skeptical of rhetoric about "open" these days.
© 2009-2015 Brett Slatkin