This week marks my 10-year anniversary as a professional software engineer. The advice I've followed all along is "always be the worst player in the band". But over time the meaning of that statement has changed for me: I've realized that there's a lot more to making music than playing the notes.
Speaking of that Jazz book: I asked a musician I know for some advice on how to learn. He said, "I know just the book for you". He showed me "Jazz Theory" and I asked, "This looks great, but are there any other books I should read?" The musician's answer was, "No. If you wanted to learn about Jesus, you'd read the Bible. If you want to learn about Jazz, this is the book."
Why isn't there such an obvious go-to book in the realm of programming? I wonder who would disagree with my musician friend, why, and what book they'd recommend instead. Perhaps it's just the nature of learning: While you're inexperienced, people have amazing, definitive advice; once you know, everything is murky.
If you haven't heard of Kerbal Space Program, the gist of the game is this: You play the role of NASA or SpaceX to design, finance, build, launch, and control rockets, spaceships, robots, and astronauts that explore the solar system. It sounds like the kind of idea that's great on paper but boring in practice, yet somehow they figured it out: Kerbal is extremely fun.
I've been watching the game evolve since it debuted. I'm really cautious about playing games because I get obsessed and they take up all my free time (to the detriment of other things, like open source projects and my social life). But this summer I was looking for a new way to chill out. I decided it was finally the right moment to start playing.
Perfectly matching a satellite's orbital parameters
It scratches a similar itch to Sim City and Civilization. But it also feels like Minecraft in how creative and arbitrary the game is. I haven't even begun to explore the tremendous Kerbal mod community that exists.
Here are some screenshots from my most recent accomplishment in the game: going to Jupiter ("Jool"), landing on two of its moons ("Bop" and "Pol"), and making it back to Earth ("Kerbin").
I had to fly a big spaceship that's powered by a nuclear rocket. Look how stoked those Kerbals are, in the lower right of the screen, to be zooming off for a 10 year mission to the edge of the solar system:
Here I'm using the guidance computer to execute my burn. If you haven't played the game, this will look insane; to someone who plays, it's actually a trivial example:
And finally, here's what it looks like when the astronauts ("Kerbonauts") make it back home safely. Succeeding in getting them home is one of the best parts. (This was a night landing because I ran out of fuel and had to rely on aerobraking to slow down).
I've barely scratched the surface of this game. I can't recommend it enough. But don't blame me if it ruins your life, too.
This article about SpaceX and their goal of colonizing Mars is the most inspiring thing I've read all year. It's long but totally worth getting through. It includes tons of great links, such as these travel posters:
I saw some giant sequoias a few weeks ago. There aren't many of these 2000+ year old trees left. Growing more seems extremely difficult given all of the war, fire, and suffering that occurs on Earth during such a time period. With all our technology and sophistication we can send a probe to Pluto, but it seems impossible to grow a tree into old age. It's so simple, yet so difficult.
Time is not something we can control. Or is it? Imagine planting the saplings, then launching yourself into space at close to the speed of light. Due to special relativity, time would elapse more slowly for you than for the trees back on Earth. After one large orbit you'd return, only a few years older yourself, to find that thousands of years have passed in the grove: problem solved.
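The arithmetic behind this is simple enough to check directly. A clock moving at speed v runs slow by the Lorentz factor γ = 1/√(1 − v²/c²). Here's a quick sketch (the 99.99% of light speed and 30-year figures are arbitrary choices, not anything from the thought experiment above):

```python
import math

def gamma(v_fraction_of_c):
    # Lorentz factor: how much slower a moving clock ticks
    # relative to a stationary observer.
    return 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)

g = gamma(0.9999)      # cruising at 99.99% of light speed
print(round(g, 1))     # 70.7
print(round(30 * g))   # 2121: 30 years aboard = ~2121 years on Earth
```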
I’m happy to report that we at Mozilla have started working with Chromium, Edge and WebKit engineers on creating a new standard, WebAssembly, that defines a portable, size- and load-time-efficient format and execution model specifically designed to serve as a compilation target for the Web.
Python 2.7.11 will be 15-20% faster by using computed gotos instead of a switch statement, thanks to a patch from Intel. Edit: To be clear, it's the bytecode interpreter's code that's faster; overall benchmarks are here.
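The switch being replaced lives in CPython's C dispatch loop, where each bytecode opcode is one case. You can see exactly which opcodes the interpreter has to dispatch on from Python itself, using the stdlib `dis` module (the exact opcode names vary by Python version):

```python
import dis

def add(a, b):
    return a + b

# Each opcode printed here corresponds to one case in the C
# interpreter's dispatch loop, which is the branch that
# computed gotos make faster.
for instr in dis.get_instructions(add):
    print(instr.opname)
```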
I went to the Gopherfest event tonight. There's a lot to be excited about in Go 1.5. The freeze is now. Release is due in August. Be sure to check out Rob Pike's talk about "Go in Go" and Andrew Gerrand's talk about the "State of Go" (May 2015 edition).
Here's a cool post that demonstrates the differences between interpreters, compilers, and just-in-time compilers. My first exposure to this was the dynamic recompilation that some Nintendo emulators would do for speed. At the time I didn’t realize that it’s pretty much the same thing as JITing.
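To make the distinction concrete, here's a toy stack-machine interpreter (a sketch; the opcodes are made up). An interpreter pays this dispatch-and-branch cost on every instruction, every time through; a JIT or dynamic recompiler instead translates a hot sequence like this into native code once and jumps straight to it afterwards:

```python
def interpret(program):
    # Classic interpreter structure: loop over instructions,
    # branch on the opcode, mutate a value stack.
    stack = []
    for op, arg in program:
        if op == 'PUSH':
            stack.append(arg)
        elif op == 'ADD':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == 'MUL':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# Computes (2 + 3) * 4
program = [
    ('PUSH', 2), ('PUSH', 3), ('ADD', None),
    ('PUSH', 4), ('MUL', None),
]
print(interpret(program))  # 20
```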
One advantage Python has is that the same faculty you use to create a program is also used to test and debug it. When you hit a confusing error, you learn how the runtime is executing your code based on its state, which feels broadly useful (after all, you were trying to imagine what it would do when you wrote the code).
By contrast, writing Scala, you have to have a grasp on how two different systems work. You still have a runtime (the JVM) which is allocating memory, calling methods, doing I/O, and possibly throwing exceptions, just like Python. But you also have the compiler, which is creating (and inferring) types, checking your invariants, and doing a whole host of other things. There's no good way to peek inside that process and see what it is doing. Most people probably never develop great intuitions around how typing works, how complex types are encoded and used by the compiler, etc.
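The Python side of that feedback loop is easy to demonstrate: the same runtime that raised an exception will hand you the exact state it died in. A sketch using the stdlib traceback machinery (`buggy` is a made-up function for illustration):

```python
import sys

def buggy(items):
    total = 0
    for item in items:
        total += item  # fails when a non-number sneaks in
    return total

try:
    buggy([1, 2, 'three'])
except TypeError:
    # Walk the traceback to the innermost frame and inspect the
    # local variables at the moment of failure.
    tb = sys.exc_info()[2]
    while tb.tb_next:
        tb = tb.tb_next
    print(tb.tb_frame.f_locals['item'])   # 'three'
    print(tb.tb_frame.f_locals['total'])  # 3
```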
This logic also explains why Go is so easy to write and probably accounts for its astounding rate of adoption. Compared to other statically typed languages, it's much easier to master both the behavior of a running Go program and its type system.
Erik's answer also puts last year's popular posts, "Why Go Is Not Good" and "Go's Type System Is An Embarrassment", in perspective. The authors of those posts don't appreciate how many people have trouble understanding complex type systems. A lack of understanding prevents programmers from using such languages effectively, which reduces overall adoption.
I submitted two talks for EuroPython this year (July 20-26 in Bilbao, Spain). They've just opened voting to existing ticket holders. If you've already bought a ticket, I'd really appreciate your vote on one or both of my talks! (You can also buy a ticket here). You should also take the time to vote for the other submissions you like. There are a lot to choose from!
These will be in the same style as the talk I gave at PyCon 2015 this year, but the content will be completely different. Hopefully they'll end up accepting whichever of my talks has the highest rating. Thanks in advance for your support!
Our product generates histograms from individual data points. We're working on improving our analysis system so it can answer more questions using the same raw input data. We do this by constructing an intermediate data format that makes computations fast. This lets us do aggregations on the fly instead of in batch, which also enables advanced use-cases like interactive parameter tuning and real-time updates.
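As a rough sketch of the idea (hypothetical code; the bin width and record shapes are stand-ins, not our real format): each raw point is reduced to a bin exactly once, the binned counts become the intermediate data, and any later aggregation is just a cheap merge of counters at query time:

```python
from collections import Counter

def to_bins(raw_points, bin_width):
    # The expensive pass over raw data happens once, producing a
    # compact intermediate record: a histogram keyed by bin index.
    return Counter(int(p // bin_width) for p in raw_points)

def merge(histograms):
    # Merging intermediate records is cheap, so it can run on the
    # fly per query instead of in batch.
    total = Counter()
    for h in histograms:
        total += h
    return total

day1 = to_bins([0.5, 1.2, 1.9, 3.3], bin_width=1.0)
day2 = to_bins([1.1, 3.7], bin_width=1.0)
combined = merge([day1, day2])
print(combined)  # Counter({1: 3, 3: 2, 0: 1})
```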
Each time we add a new analysis feature, we also have to regenerate the intermediate data from the raw data. It's analogous to rebuilding a database index to support new types of queries. The problem is that this data transformation is non-deterministic and may produce slightly different results each time it runs. This varying behavior was never a problem before because the old system only did aggregations once. Now we're stuck with no stable way to migrate to the new system. It's a crucial mistake in the original design of our raw data format. It's my fault.
The functional programmers out there, who naturally leverage the power of immutability, may be chuckling to themselves now. Isn't reproducibility an obvious goal of any well-designed system? Yes. But, looking back to 4 years ago when we started this project, we thought it was impossible to do this type of analysis on the fly. The non-determinism made sense. We were blind to this potential before. Since then we've learned a lot about the problem domain. We've gained intuition about statistics. We're better programmers.
In hindsight, we should have questioned our assumptions more. For example, we should have asked: What if it becomes possible someday to do X? What design decisions would we change? That would have been enough to inform the original design and prevent this problem with little cost the first time around. It's true that too much future-proofing leads to over-design and over-abstraction. But a little bit of future-proofing goes a long way. It's another variation of the adage, "an ounce of prevention is worth a pound of cure".
1. The non-determinism is caused by machine learning classifiers, which are retrained over time. This results in slightly different judgements between different builds of the classifier.
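A sketch of the kind of prevention I mean (hypothetical code; `classify` stands in for the real retrained model): make the transformation a pure function of the raw record plus an explicit classifier version, and store that version with every intermediate record. Rebuilding with the same version is then reproducible, and a retrained classifier becomes a new, clearly labeled epoch instead of silent drift:

```python
CLASSIFIER_VERSION = '2015-06-01'

def classify(record, version):
    # Stand-in for a real trained model: any rule that is
    # deterministic for a given version works for this sketch.
    return 'long' if len(record) > 5 else 'short'

def to_intermediate(raw_records, version=CLASSIFIER_VERSION):
    # Every intermediate record carries the version that produced
    # it, so regenerated data can be compared epoch by epoch.
    return [
        {'label': classify(r, version), 'version': version}
        for r in raw_records
    ]

first = to_intermediate(['alpha', 'beta', 'gamma!'])
second = to_intermediate(['alpha', 'beta', 'gamma!'])
assert first == second  # same inputs + same version -> same output
```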
Response to "The Long-Term Problem With Dynamically Typed Languages"
I enjoyed this post quite a bit: "The Long-Term Problem With Dynamically Typed Languages". I think he's got some great points. I especially like the analogy with the "broken windows effect". It's interesting to hear about someone's experience using a software system or practice for a long time.
The best data point I have on this personally is my current project/team. The codebase is over 500KLOC now. The majority of it is in Python, followed by JS. I’ve been working on it since the beginning—over 4 years. We’ve built components and then extended them way beyond their original design goals. There’s a lot of technical debt. Some of it we’ve paid down through refactoring. Other parts we’ve rewritten. Mostly we live with it.
As time has gone on, we’ve gained a better understanding of the problem domain. The architecture of the software system we want is very different than what we have or what we started with. Now we’re spending our time figuring out how to get from where we are to where we want to be without having to rewrite everything from scratch.
I agree we have the lava layers problem the author describes, with multiple APIs to do the same thing. But I'm not sure we would spend our time unifying them even if we had the kind of miraculous tooling afforded by static types.
Our time is better spent reevaluating our architecture and enabling new use-cases. For example, one change we’ve been working towards reduces the turn-around time for a particular data analysis pipeline from 30 minutes to 1 millisecond (6 orders of magnitude). Now our product will be able to do a whole bunch of cool stuff that was impossible before. It took a lot of prototyping to get here. I don’t think static types would have helped.
My team’s biggest problem has always been answering the question: “How do we stay in business?” We've optimized for existence. We’ve had to adapt our system to enable product changes that make our users happy. Maybe once your product definition is as stable as Google Search or Facebook Timeline, you can focus on a codebase that scales to 10,000 engineers and 10+ years of longevity. I haven't worked on such a project in my career. For me the requirements are always changing.
What I will say is that when I first arrived at Google in 2005, I felt like I was stepping into the future. According to The New Stack's write-up, tools like Docker, CoreOS, and Mesos are 10 years behind what Borg provided long ago. Following that delayed timeline, I wonder how long it will be before people realize that all of this server orchestration business is a waste of time.
Ultimately, what you really want is to never think about systems like Borg that schedule processes to run on machines. That's the wrong level of abstraction. You want something like App Engine, vintage 2008 platform as a service, where you run a single command to deploy your system to production with zero configuration.
Effective Python sold out thanks to PyCon 2015! My publisher had to do an emergency second printing a few weeks ago because it looked like this might happen. Luckily, the book will be restocked everywhere on April 17; in the meantime, Amazon still has some copies left in its warehouse. I'm happy to know people are enjoying it!
Instead of writing a conference overview, here's an assorted list of links to things I saw, heard of, or wondered about while attending talks and meeting new people at PyCon this year. These are in no particular order, and I'm surely missing a bunch that I forgot to write down.
"SageMath is a free open-source mathematics software system licensed under the GPL. It builds on top of many existing open-source packages: NumPy, SciPy, matplotlib, Sympy, Maxima, GAP, FLINT, R and many more. Access their combined power through a common, Python-based language or directly via interfaces or wrappers."
"ROS [Robot Operating system] is an open-source, meta-operating system for your robot. It provides the services you would expect from an operating system, including hardware abstraction, low-level device control, implementation of commonly-used functionality, message-passing between processes, and package management."
"GeoJSON is a format for encoding a variety of geographic data structures."
"GeoPandas is an open source project to make working with geospatial data in python easier. GeoPandas extends the datatypes used by pandas to allow spatial operations on geometric types."
"N-grams to the rescue! A collection of unigrams (what bag of words is) cannot capture phrases and multi-word expressions, effectively disregarding any word order dependence. Additionally, the bag of words model doesn’t account for potential misspellings or word derivations."
"A Few Useful Things to Know about Machine Learning: This article summarizes twelve key lessons that machine learning researchers and practitioners have learned. These include pitfalls to avoid, important issues to focus on, and answers to common questions."
"diff-cover: Automatically find diff lines that need test coverage. Also finds diff lines that have violations (according to tools such as pep8, pyflakes, flake8, or pylint). This is used as a code quality metric during code reviews."
"Glue is a Python library to explore relationships within and among related datasets."
"Tabula: If you’ve ever tried to do anything with data provided to you in PDFs, you know how painful it is — there's no easy way to copy-and-paste rows of data out of PDF files. Tabula allows you to extract that data into a CSV or Microsoft Excel spreadsheet using a simple, easy-to-use interface."
"Blaze expressions: Blaze abstracts tabular computation, providing uniform access to a variety of database technologies"
"Ghost Inspector lets you create and manage UI tests that check specific functionality in your website or application. We execute these tests continuously from the cloud and alert you if anything breaks."
"The bytearray class is a mutable sequence of integers in the range 0 <= x < 256. It has most of the usual methods of mutable sequences, described in Mutable Sequence Types, as well as most methods that the bytes type has."
"Bug: use HTTPS by default for uploading packages to pypi"
If you ever wonder why people go to conferences, this is why. You get exposure to a wide range of topics in a very short period of time, plus you get to meet people who are excited about everything and inspire you to learn more.