
11 November 2017

Baby Names

A dataset I've been playing around with recently is the list of names from the US Social Security Administration, which goes from the year 1880 through 2016. I loaded the data into BigQuery and then made this visualization using Data Studio. It's been interesting looking up the names of my friends and family to see just how popular their names are now versus the time they were born. Here are some of the most interesting patterns that I've found in the data:


The Diva effect

When a pop star starts getting famous, people name their kids after them for the first few years. But at some point they become such a big deal that nobody uses that name anymore.



The Olympics effect

Most people have never heard of the name "Bode", but every four years during winter we get to see Bode Miller dazzle us all with his amazing skiing skills on TV, leading to a corresponding bump in names:



The Disaster effect

People stop naming their kids after anything associated with a disaster, be it natural or man-made. My prediction is that the name "Harvey", which has had a recent resurgence, will significantly decline after this year's hurricane.



Trendy names

There are many names that become extremely popular for a decade and then significantly decline. Some of these names come from movie characters or actors at the time.



Here's the link to the visualization again. Let me know if you find anything interesting!
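If you'd rather poke at the raw data locally instead of through the BigQuery visualization, here's a minimal pandas sketch. It assumes you've unzipped the SSA's national data files into a directory, one yob<year>.txt file per year containing headerless name,sex,count rows; verify that layout against the files you actually download.

```python
# Minimal sketch: compute a single name's share of all recorded births per
# year from the SSA national data files (assumed layout: yob1880.txt through
# yob2016.txt, each a headerless CSV of name,sex,count).
import glob

import pandas as pd


def load_names(directory):
    frames = []
    for path in sorted(glob.glob(f'{directory}/yob*.txt')):
        year = int(path[-8:-4])  # Pull the year out of "yobYYYY.txt"
        frame = pd.read_csv(path, names=['name', 'sex', 'count'])
        frame['year'] = year
        frames.append(frame)
    return pd.concat(frames, ignore_index=True)


def yearly_share(names, target):
    # Fraction of all recorded births each year with the target name.
    totals = names.groupby('year')['count'].sum()
    matches = names[names['name'] == target].groupby('year')['count'].sum()
    return (matches / totals).fillna(0)


if __name__ == '__main__':
    names = load_names('names')  # Wherever you extracted names.zip
    print(yearly_share(names, 'Bode').tail(20))
```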

07 May 2017

Link roundup #10

Another big backlog of links from the past few months. I need to get better at sending these in smaller digests.

03 April 2017

Discovering my inner curmudgeon: A Linux laptop review

Quick refresher: I'm a life-long Mac user, but I was disappointed by Apple's latest MacBook Pro release. I researched a set of alternative computers to consider. And, as a surprise even to myself, I decided to leave the Mac platform.

I chose the HP Spectre x360 13" laptop that was released after CES 2017, the new version with a 4K display. I bought the machine from Best Buy (not an affiliate link) because that was the only retailer selling this configuration. My goal was to run Ubuntu Linux instead of Windows.

Here are my impressions from using this computer over the past month, followed by some realizations about myself.


Ubuntu

Installing Ubuntu was easy. The machine came with Windows 10 preinstalled. I used Windows' built-in Disk Management app to shrink the main partition and free up space for Linux. I loaded the Ubuntu image on a USB drive, which conveniently fit in the machine's USB-A port (missing on new Macs). Then I followed Ubuntu's simple install instructions, which required some BIOS changes to enable booting from USB.

Screen

The 4K screen on the Spectre x360 is gorgeous. The latest Ubuntu handles High DPI screens well, which surprised me. With a combination of the built-in settings app and additional packages like gnome-tweak-tool, you can get UI controls rendering on the 4K display at 2x native size, so they look right. You can also boost the default font size to make it proportional. There are even settings to adjust icon sizes in the window titlebar and task manager. It's fiddly, but I got everything set up relatively quickly.

Trackpad

The trackpad hardware rattles a little, but it follows your fingers well and supports multi-touch input in Ubuntu by default. However, you immediately realize that something is wrong when you try to type and the mouse starts jumping around. The default Synaptics driver for Linux doesn't properly ignore palm presses on this machine. The solution is to switch to the new libinput system. By adjusting the xinput settings you can get it to work decently well.

But the gestures I'm used to, like two finger swipe to go back in Chrome, or four-finger swipe to switch workspaces, don't work by default in Ubuntu. You have to use a tool like libinput-gestures to enable them. Even then, the gestures are only recognized about 50% of the time, which is frustrating. The "clickpad" functionality is also problematic: When you press your thumb on the dual-purpose trackpad/button surface in order to click, the system will often think you meant to move the mouse or you're trying to multi-touch resize. Again: It's frustrating.

Keyboard

Physically the keyboard is good. The keys have a lot of travel and I can type fast. The left control key is in the far bottom left so it's usable (unlike Macs that put the function key there). The arrow keys work well. One peculiarity is the keyboard has an extra column of keys on the right side, which includes Delete, Home, Page Up, Page Down, and End. This caused my muscle memory for switching from arrow keys to home row keys to be off by one column. This also puts your hands off center while typing, which can make you feel like you're slightly reaching on your right side.

At first I thought that the extra column of keys (Home, Page Up, etc) was superfluous. But after struggling to use Sublime Text while writing some code, I realized that the text input controls on Linux and Windows rely on these keys. It makes sense that HP decided to include them. As a Mac user I'm used to Command-Right going to the end of line, where a Windows or Linux user would reach for the End key. Remapping every key to match the Mac setup is possible, but hard to make consistent across all programs. The right thing to do is to relearn how to do text input with these new keys. I spent some time trying to retrain my muscle memory, but it was frustrating, like that summer when I tried Dvorak.

Sound

The machine comes with four speakers: two fancy Bang & Olufsen speakers on top, and two normal ones on the bottom. The top speakers don't work on Linux and there's a kernel bug open to figure it out. The bottom speakers do work, but they're too quiet. The headphone jack worked correctly, and it would even mute the speakers automatically when you plugged in headphones. I believe this only happened because I had upgraded my kernel to the bleeding edge 4.10 version in my attempts to make other hardware functional. I figure the community will eventually resolve the kernel bug, so the top speaker issue is likely temporary. But this situation emphasizes why HP needs to ship their own custom distribution of Windows with a bunch of extra magical drivers.

Battery & power

Initially the battery life was terrible. The 4K screen burns a lot of power. I also noticed that the CPU fan would turn on frequently and blow warm air out the left side of the machine. It's hot enough that it's very uncomfortable if it's on your lap. I figured out that this was mostly the result of a lack of power management in Ubuntu's default configuration. You can enable a variety of powersaving tools, including powertop and pm-powersave. Intel also provides Linux firmware support to make the GPU idle properly. With all of these changes applied, my battery life got up to nearly 5 hours: a disappointment compared to the 9+ hours advertised. On a positive note, the USB-C charger works great and fills the battery quickly. It was also nice to be able to charge my Nexus X phone with the same plug.

Two-in-one

The Spectre x360 gets its name from the fact that its special hinges let the laptop's screen rotate completely around, turning it into a tablet. Without any special configuration, touching the screen in Ubuntu works properly for clicking, scrolling, and zooming. Touch even works for forward/back gestures that don't work on the trackpad. The keyboard and trackpad also automatically disable themselves when you rotate into tablet mode. You can set up Onboard, Gnome's on-screen keyboard, and it's decent. Screen auto-rotation doesn't work, but I was able to cobble something together using iio-sensor-proxy and this one-off script. Once I did this, though, I realized that the 16:9 aspect ratio of the screen is too much: It hurts my eyeballs to scan down so far vertically in tablet mode.

Window manager and programs

I haven't used Linux regularly on a desktop machine since Red Hat 5.0 in 1998. It's come a long way. Ubuntu boots very quickly. The default UI uses their Unity window manager, a Gnome variant, and it's decent. I tried plain Gnome and it felt clunky in comparison. I ended up liking KDE the most, and would choose the KDE Kubuntu variant if I were to start again. Overall the KDE window manager felt nice and did everything I needed.

On this journey back into Linux I realized that most of the time I only use eight programs: a web browser (Chrome), a terminal (no preference), a text editor (Sublime Text 3), a settings configurator, a GUI file manager, an automatic backup process (Arq), a Flux-like screen dimmer, and an image editor (the Gimp). My requirements beyond that are also simple. I rely on four widgets: clock, wifi status, battery level, and volume level. I need a task manager (like the Dock) and virtual work spaces (like Mission Control / Expose). I don't use desktop icons, notifications, recent apps, search, or an applications menu. I was able to accommodate all of these preferences on Linux.


Conclusion

If you're in the market for a new laptop, by all means check this one out. However, I'll be selling my Spectre x360 and going back to my mid-2012 MacBook Air. It's not HP's fault or because of the Linux desktop. The problem is how I value my time.

I'm so accustomed to the UX of a Mac that it's extremely difficult for me to use anything else as efficiently. My brain is tuned to a Mac's trackpad, its keyboard layout, its behaviors of text editing, etc. Using the HP machine and Linux slows me down so much that it feels like I'm starting over. When I'm using my computer I want to spend my time improving my (programming, writing, etc) skills. I want to invest all of my "retraining" energy into understanding unfamiliar topics, like new functional languages and homomorphic encryption. I don't want to waste my time relearning the fundamentals.

In contrast, I've spent the past two years learning how to play the piano. It's required rote memorization and repeated physical exercises. By spending time practicing piano, I've opened myself up to ideas that I couldn't appreciate before. I've learned things about music that I couldn't comprehend in the past. My retraining efforts have expanded my horizons. I'm skeptical that adopting HP hardware and the Linux desktop could have a similar effect on me.

I'm stubborn. There will come a time when I need to master a new way of working to stay relevant, much like how telegraph operators had to switch from Morse code to teletypes. I hope that I will have the patience and foresight to make such a transition smoothly in the future. Choosing to retrain only when it would create new possibilities seems like a good litmus test for achieving that goal. In the meantime, I'll keep using a Mac.

10 December 2016

The Paradox of UX

A realization I had this week:

  • Software that costs money often has terrible UX, despite the developers having the revenue and resources to improve it.
  • In contrast, free/unpaid software often has great UX, even though users are unwilling to pay for it.

Why?

Paid software is worth buying because it solves an immediate need for the user. Developers of paid software are incentivized to put all of their energy into building more features to solve more problems that are worth paying for. There's no reason to improve usability as long as customers are satisfied enough to keep paying. Each time you make the software a little more complicated, you make it more valuable, leading to more revenue. It's self-reinforcing.

With free/unpaid software, the goal is to get the largest audience you can. The developers' revenue comes from indirect sources like advertising. The bigger their audience, the more money they earn. They maximize their audience by improving usability, broadening appeal, and streamlining. Each time the software gets a little easier to use, more people can start using it, leading to a larger audience, which generates more revenue. It's similarly self-reinforcing.


The conclusions I draw from this:

1. Competition drives usability. Free apps must have great UX because they need to compete against other free apps for your attention and usage. Paid apps that don't have competition can ignore usability because there's no alternative for users.

2. If the market of customers is big enough, competing paid software will emerge. Once it does, it's just a matter of time before all software in the space reaches feature parity. (e.g., Photoshop vs. Pixelmator, Hipchat vs. Slack, AutoCAD vs. SolidWorks, GitHub vs. Bitbucket).

3. If your paid software has capable competitors, you must differentiate with the quality of your user experience. You're fooling yourself if you think that you'll be able to stay ahead by adding incremental features over time.

01 December 2016

The JavaScript language continues to get bigger and more complex. Latest example. Please stop adding features to it!

12 November 2016

Building robust software with rigorous design documents

My work is centered around building software. In the past, I've been the primary designer and implementor of large software systems, collaborating with many engineers to launch programs into production. Lately, I've been spending much more of my time guiding others in their software design efforts.

Why design software at all? Why not just start writing code and see where it leads? For many problems, skipping design totally works and is the fastest path. In these cases, you can usually refactor your way into a reasonable implementation. I think you only need to design in advance once the scope of the software system you're building is large enough that it won't all fit in your head, or complex enough that it's difficult to explain in a short conversation.

Writing a design document is how software engineers use simple language to capture their investigations into a problem. Once someone has written a design document, a technical lead — often the same person who was the author of the document — can use it to set target milestones and drive an implementation project to completion.

I realized that I've never actually written down what I look for in a design document. So my goal in this post is to give you a sense of what I think it takes to write a design document, what the spectrum of design documentation looks like in my experience, and what I consider to be the crucial elements of good design.


How to get started

First, before writing any kind of formal documentation, you need to prototype. You need to gain experience working in the problem domain before you can establish any legitimate opinions.

The goal of making a prototype is to investigate the unknown. Before you start prototyping, you may have some sense of the "known unknowns", but understand little about them in practice. By prototyping, you'll improve your intuition so you can better anticipate future problems. You may even get lucky and discover some unknown unknowns that you couldn't have imagined.

Concretely, prototyping is getting the system to work end-to-end in one dimension (e.g., a tracer bullet implementation). It's working out and proving that the most confusing or risky part of the system is possible (e.g., the core algorithm). Or prototyping is dry-fitting all of the moving parts together, but without handling any of the complex edge cases. How you go about prototyping reflects the kind of problem you're trying to solve.


How to write a design document

The first draft of your "design document" is the code for your first working prototype. The second draft is a rough document that explains everything you learned from building that prototype. The third draft includes a proposal for a better design that addresses all of the difficulties you discovered while prototyping. You should share the third draft with the rest of your team to get their feedback. Then the final draft is a revision of the document that addresses all of the questions and concerns raised by your peers.

Design documents should be as short as possible. They should include enough detail to explain what you need to know and nothing more. Your design doc shouldn't include any code unless that code is critical for the reader to understand how the whole design fits together (e.g., an especially difficult algorithm that relies on a specific programming language's constructs).

There are five major sections that I recommend you have in a design document, and in this order:

1. Background

This is information that puts the design in context. You should assume that your reader knows very little about the subject matter. Here you should include everything they'll need to know about the problem domain to understand your design. Links to other design documents, product requirements, and existing code references are extremely useful.

When writing the background section, you should assume that it will be read by someone with no context. A couple of years from now, all of the knowledge that led to your implementation will likely be forgotten. You should treat the background section like it's a letter to the future about what you understood at this time in the past.

2. Goals and non-goals

These are the motivations for your project. Here you summarize the intentions of your proposed implementation. This section should explain the measurable impact of your design. You should provide estimates of how much you're going to help or hurt each quantifiable metric that you care about.

This section should also explicitly list the outcomes that you're not trying to achieve. This includes metrics you won't track, features you won't support, future functionality that isn't being considered yet, etc. Tracking non-goals is the primary way you can prevent scope creep. As your peers review your design document and bring up questions and comments, the non-goals section should grow accordingly to rule out entire areas of irrelevant investigation.

3. Overview

This section is a coarse explanation of what the software system is going to do. Engineers familiar with the problem and context should be able to read the overview and get a general sense of what the major moving parts of the design are. By reading the overview section, a fellow engineer should be able to formulate a set of important questions about the design. The purpose of the rest of the design document is to answer those questions in advance.

4. Detail

This section goes through each major component from the design overview and explains it in precise language. You should answer every reasonable question you can think of from the design overview. This is where you put things like sequence diagrams. You may also list step-by-step recipes that you'll employ in the software to solve the primary problem and various subproblems.

5. Risks

After reading the detailed design, your readers should have a sense of where your design may go wrong. It's a given that your system will fail to work in certain ways. You're making time vs. space trade-offs that are incorrect. The tolerances for the resources you need, or the wiggle room you'll have to accommodate changes, will be insufficient. Edge cases you ignored will turn out to be the most important functionality. In this section you should list how you anticipate your system will break, how likely you think those failures will be, and what you'll do to mitigate those problems before or when they occur.


What is the scope of a design document

After going through the distinct sections of a design document, there are still many open questions: How much detail should a design document include? How big of a scope should you address in a single design document? What should you expect a software engineer to produce on their own?

I'll answer these questions by trying to characterize the nature of problems that are solved by software engineers. You can identify distinct levels of software complexity by considering the size and shape of what's being confronted. Here's a conceptual diagram of what I consider to be the hierarchy of scope that software engineers handle:



The breakdown of this hierarchy is:

  • Market need: Broad category of goods and services that people and organizations desire.
  • Opportunity: Related ways of addressing those market needs.
  • Problem domain: Vast areas of complexity that must be understood and addressed to seize such opportunities.
  • Problem: Distinct issues in the problem domain that must be solved in order to advance towards the opportunity.
  • Subproblem: The many aspects of a larger issue that must be handled in order to solve the whole problem.

Here's a concrete example of what I mean with this hierarchy:

  • Market need: Hosting websites
  • Opportunity: Cloud computing
  • Problem domain: Virtual machines
  • Problem: CPU performance
  • Subproblems: context switching; cache invalidation

Some problems contain dozens of subproblems. Some problem domains contain hundreds of problems. Some opportunities contain vast numbers of related problem domains. And so on. This hierarchy diagram isn't meant to quantify the size ratios between these concepts. I'm trying to express their unique nature and the relationships between them.

The detail I expect to see in a design document, and thus the document's length, varies based on the scope of the project. Here's roughly the breakdown of design document length that I've seen in my career:

  • Subproblem: 500 words
  • Problem: 2,000 words
  • Problem domain: 8,000 words
  • Opportunity: 1,500 words
  • Market need: 1,000 words

What's surprising about this table is that designs addressing a problem domain are the most rigorous. The design detail required to handle a problem domain far exceeds that of any other scope. Design detail doesn't continue to increase as scope grows. Instead, as an engineer's scope expands to include multiple problem domains, multiple opportunities, and entire market needs, the level of detail I've seen in design documents plummets.

I think the reason these documents are so rigorous is that understanding a problem domain is the most difficult task an engineer can handle on their own. These design documents aren't immense because they're overly verbose, they're extremely detailed because that's what it takes to become an expert in a problem domain.

But there aren't enough hours in the day to become an expert in multiple areas. Once someone's scope gets large enough, they must start handing off problem domains to other engineers who can devote all of their time to each area. It's unrealistic for one person to write design documents for many problem domains. Thus, the design documents for larger scopes actually get smaller.


How to design for a whole problem domain

So what, exactly, goes into a design document for a problem domain? What makes these docs so detailed and rigorous? I believe that the hallmark of these designs is an extremely thorough assessment of risk.

As the owner of a problem domain, you need to look into the future and anticipate everything that could go wrong. Your goal is to identify all of the possible problems that will need to be addressed by your design and implementation. You investigate each of these problems deeply enough to provide a useful explanation of what they mean in your design document. Then you rank these problems as risks based on a combination of severity (low, medium, high) and likelihood (doubtful, potential, definite).
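Purely as an illustration (not a required format), here's a tiny sketch of how that ranking might look while you're drafting the document; the entries are borrowed from the virtual machines example later in this post and the scores are made up.

```python
# Illustrative sketch: order design risks by severity times likelihood so
# the scariest ones lead the risks section. All entries are examples.
SEVERITY = {'low': 1, 'medium': 2, 'high': 3}
LIKELIHOOD = {'doubtful': 1, 'potential': 2, 'definite': 3}

risks = [
    ('Local disk access fairness limits VMs per host', 'high', 'definite'),
    ('Page alignment conflicts hurt memory performance', 'medium', 'potential'),
    ('Endianness discrepancies', 'low', 'doubtful'),
]

ranked = sorted(
    risks,
    key=lambda risk: SEVERITY[risk[1]] * LIKELIHOOD[risk[2]],
    reverse=True)

for description, severity, likelihood in ranked:
    print(f'{severity:>6} / {likelihood:<9} {description}')
```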

For each risk, you must decide:

  • Does it need to be mitigated in order to ship the minimum viable product?
  • Can it be addressed after shipping without hindering the product initially?
  • How will it be addressed if certain behaviors become worse over time?

You should not plan to mitigate every risk in advance; this is impossible because the scope of the problem domain you've taken on is too large and the unknowns are too complex. Instead, your design document should identify the most likely scenarios and outline potential mitigations for them. These mitigations can end up being large projects in themselves, and often need to be designed and implemented by dedicated teams of engineers.

To be more concrete about what a problem domain design document looks like, let's assume that you've taken on the cloud computing example from before. The problem domain you're addressing is "virtual machines". Here's what you'd cover in your risk assessment.

First, you'd enumerate the major concerns within this problem domain:

  • CPU performance
  • Security
  • Memory performance
  • I/O performance
  • and so on ...

Then you'd identify expected subproblems:

  • CPU Performance
    • Instruction cache thrashing due to context switching
    • Branch prediction failures
    • Kernel lock contention
  • Security
    • CPU instructions that can exploit ring 0
    • Multi-threading exploits to system calls
    • Shared address space between guest and host OS
    • DMA vulnerabilities
  • Memory performance
    • Data cache performance because of lack of CPU pinning and NUMA architectures
    • Page alignment conflicts between virtual machine address space and host OS address space
    • Endianness discrepancies
    • Oversubscribing memory to increase multi-tenancy
    • Memory compression and duplicate page merging
  • I/O performance
    • System call context switching overhead
    • Avoiding copies for network sends
    • Local disk access fairness
    • Latency vs. throughput interplay when the VM workloads on a single host machine are wildly different
  • and so on...

For each problem and subproblem you'd flesh out the potential solutions in the design document. You may write small programs to verify assumptions, do load tests to find the realistic limits of infrastructure, forecast reasonable estimates for capacity, etc. What you hope to learn during the design phase is if there are any major dealbreakers that could undermine the viability of your design.

For example, when digging into the I/O performance problem, you may find through experimentation that all VM guest operating systems will need to do a number of disk reads while idling. You may then measure how many reads each VM will need on average, and use that result to estimate the maximum number of VMs per physical machine and local disk. You may discover that local disk performance will severely limit your system's overall scalability. At this point you should document the reasoning that led you to this conclusion and show your work.
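To make that kind of estimate concrete, here's a back-of-the-envelope sketch; every number in it is a made-up placeholder standing in for a measurement you'd actually take.

```python
# Back-of-the-envelope sketch: bound VMs per physical machine by idle disk
# reads. All of the numbers below are invented placeholders.
IDLE_READS_PER_SEC_PER_VM = 12   # Measured average for one idle guest OS
DISK_READ_IOPS = 180             # Sustained reads per second for one local disk
DISKS_PER_MACHINE = 4
HEADROOM = 0.5                   # Reserve half the IOPS for real workloads

usable_iops = DISK_READ_IOPS * DISKS_PER_MACHINE * HEADROOM
max_vms = int(usable_iops // IDLE_READS_PER_SEC_PER_VM)
print(f'Rough ceiling: {max_vms} VMs per physical machine')
```

Showing this kind of arithmetic in the document lets reviewers challenge the inputs instead of just the conclusion.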

Once you've identified the potential dealbreaker, you should figure out if it's viable to launch your virtual machine product without first solving the local disk issue. Your proposed design should list what you'd do if demand grew too quickly, such as requiring a wait-list for new users, limiting each user to a maximum quota of allowed VMs, throwing money at the problem with more physical hardware, etc. Your document should explain the whole range of alternatives and settle on which ones are the most prudent to implement for launch.

By recognizing such a large problem in advance, you may also reach the conclusion that you need to build a virtual local disk system in order to release your product at all. That may severely delay your timeline because the problems you need to address for launch have become much larger than you originally anticipated. Or maybe you decide to launch anyways.

The point is that it's always much better to consider all the risks before you launch. It's acceptable to take risks as long as you're well informed. It's a disaster to learn about large risks once you're already on the path to failure.


Conclusion

Even though it's full of information, what's most impressive about a problem domain design document is that it doesn't feel like overdesign. The risk mitigations it includes are not overspecified. There's just enough detail to get a handle on the problem domain. When the engineering team begins implementing such a design, there's still a lot of flexibility to change how the problems are solved and how the implementation actually works.

In general, you're kidding yourself if you think that software will be built the way it was originally designed. The goal of these design documents isn't to provide the blueprints for software systems. The goal is to prepare your team for a journey into the unknown. The definitive yet incomplete nature of a problem domain design document is what makes it the pinnacle of good software design.

30 October 2016

Realistic alternatives to Apple computers

I'm disappointed with the new MacBook Pros and I wrote my thoughts about them here. Since the announcement, I've been researching all of my options and weighing the pros and cons. What follows comes from my own assessment of 16 laptops, their features, and reviews I've read about them. I'll highlight the ones that I think are the top five alternatives to Apple's computers. At the end there is a grid of all the options and links to more info. The machines I'm evaluating are either for sale right now or will be shipping by the end of the year. I'm not holding out for any rumored products.

These are the attributes that I think are important when choosing a new laptop:

Must have:

  • 13" form-factor
  • Thunderbolt 3 ports
  • Headphone jack
  • Works decently with Linux

Prefer:

  • HiDPI display (more than 200 pixels per inch)
  • 7th generation Core i7 CPU
  • 16 GB of RAM
  • USB-C ports

Ambivalent:

  • Flip form-factor (aka "2-in-1")
  • USB 3.0 old-style A connectors
  • More than 6 hours of battery life

Avoid:

  • Proprietary power plug (USB-C charging is better)
  • HDMI ports
  • SD card reader
  • Display port

It's worth emphasizing how valuable Thunderbolt 3 is. With its 40Gbps transfer rate, "external GPU" enclosures have become a real thing and the options are increasing. In 2017, you should expect to dock your laptop into a gnarly GPU and use it for some intensive computation (VR, 3D design, neural network back propagation). Thunderbolt 3 also makes it easy to connect into one or more 4K+ external displays when you're not on the go. Not having Thunderbolt 3 significantly limits your future options.

The other details to look for are Skylake (6th generation) vs. Kaby Lake (7th generation) processors, and Core i5/i7 vs. Core M processors. The differences are subtle but meaningful. All of the new MacBook Pros and the MacBook 12" have 6th generation CPUs. The MacBook Pros have i5/i7 chips. The 12" MacBooks have m3/m5/m7 chips. It's a bit odd that the latest and greatest from Apple includes chips that were released over a year ago.


Here's my list of options, ordered by which ones I'm most seriously considering:


1. HP Spectre x360

Official product page and someone else's review that I found helpful.

It doesn't have a HiDPI display, but everything else looks sleek and great. The previous year's model was also available in a 4K version, but that doesn't have any Thunderbolt 3 ports. If they do release a variation of the new one in 4K, that model would be the winner for me by every measure.



Price: $1,299
Pros: Two Thunderbolt 3 ports. Charge via USB-C. 2-in-1 laptop.
Cons: HD Display.
Thickness: 13.71mm
Weight: 2.86lbs
Battery: 57Wh
Display: 1920 x 1080 (Touch)
CPU: Intel 7th Generation Core i7-7500U dual core
RAM: 16GB
Storage: 512GB Flash
Graphics: Intel HD Graphics 620
Power plug: USB-C
Thunderbolt 3 ports: 2
USB-C (non-Thunderbolt) ports: 0
USB 3.0 A ports: 1
SD slots: None
Video ports: None
Audio ports: Headphone/mic jack


2. Razer Blade Stealth 4K

Official product page and someone else's review that I found helpful.

With extra ports and a thick bezel it's not as svelte as I'd like. But the build quality seems high and I bet the 4K display looks awesome. Razer's Core external GPU is the easiest setup of its kind right now. There's also a cheaper option for $1,249 with less storage and a 2560 x 1440 screen (which is HiDPI like a MacBook but not close to 4K).



Price: $1,599
Pros: 4K display. One Thunderbolt 3 port. Charge via USB-C.
Cons: No USB-C ports besides the single Thunderbolt 3 one. Unnecessary video out. Big bezel around a small physical screen.
Thickness: 13.1mm
Weight: 2.84lbs
Battery: 53.6Wh
Display: 3840 x 2160 (Touch)
CPU: Intel 7th Generation Core i7-7500U dual core
RAM: 16GB
Storage: 512GB Flash
Graphics: Intel HD Graphics 620
Power plug: USB-C
Thunderbolt 3 ports: 1
USB-C (non-Thunderbolt) ports: 0
USB 3.0 A ports: 2
SD slots: None
Video ports: HDMI
Audio ports: Headphone/mic jack


3. Dell XPS 13

Official product page and someone else's review that I found helpful.

This laptop has a modern edge-to-edge screen, but it's not quite 4K. I wouldn't look forward to lugging around the Dell-specific power cable (and being screwed when I lose it). Update: Blaine Cook corrected me in the comments: It turns out that it can charge via USB-C in addition to the proprietary power plug. Hooray! — Its ports, slots, and camera are a bit quirky. But, strongly in favor, it's also the laptop that Linus uses! There's a cheaper version with less storage and a slower i5 CPU for $1,399.



Price: $1,849
Pros: One Thunderbolt 3 / USB-C port. Nearly 4K display.
Cons: Expensive. Unnecessary SD card slot. Proprietary power plug. Webcam is in a weird location.
Thickness: 9-15mm
Weight: 2.9lbs
Battery: 60Wh
Display: 3200 x 1800 (Touch)
CPU: Intel 7th Generation Core i7-7500U dual core
RAM: 16GB
Storage: 512GB Flash
Graphics: Intel HD Graphics (unspecified version)
Power plug: Proprietary
Thunderbolt 3 ports: 1
USB-C (non-Thunderbolt) ports: 0
USB 3.0 A ports: 2
SD slots: SD slot
Video ports: None
Audio ports: Headphone/mic jack


4. HP EliteBook Folio G1

Official product page and someone else's review that I found helpful.

This machine is tiny, fanless, and looks like a MacBook Air at first glance. It has Thunderbolt 3 and none of the old ports weighing it down. And 4K! The biggest drawback is that the CPU is a 6th generation Core M processor instead of an i5 or i7. If the 12" MacBook is more your speed than the MacBook Pro, then this could be the right machine for you.



Price: $1,799
Pros: Charge via USB-C. Two Thunderbolt 3 ports. 4K display.
Cons: Expensive. Underpowered 6th-generation M CPU. Max 8GB of RAM.
Thickness: 11.93mm
Weight: 2.14lbs
Battery: 38Wh
Display: 3840 x 2160
CPU: Intel 6th Generation m7-6Y75 dual core
RAM: 8GB
Storage: 256GB Flash
Graphics: Intel HD Graphics 515
Power plug: USB-C
Thunderbolt 3 ports: 2
USB-C (non-Thunderbolt) ports: 0
USB 3.0 A ports: 0
SD slots: None
Video ports: None
Audio ports: Headphone/mic jack


5. Lenovo Yoga 910

Official product page and someone else's review that I found helpful.

If this had a Thunderbolt 3 port, I think it would be the laptop to get. It has a 4K screen and the styling looks great. Unfortunately, instead of Thunderbolt 3, Lenovo included a USB-C port that only speaks USB 2.0 protocol (not a typo, it's version two) and is used for charging. There's a cheaper option with less storage and RAM for $1,429.



Price: $1,799
Pros: Two USB-C ports. Charge via USB-C. 4K display. 2-in-1 laptop.
Cons: No Thunderbolt 3 ports. Small battery. Expensive. One of the USB-C ports is a USB 2.0 port.
Thickness: 14.3mm
Weight: 3.04lbs
Battery: 48Wh
Display: 3840 x 2160 (Touch)
CPU: Intel 7th Generation i7-7500U dual core
RAM: 16GB
Storage: 1TB Flash
Graphics: Intel HD Graphics 620
Power plug: USB-C
Thunderbolt 3 ports: 0
USB-C (non-Thunderbolt) ports: one 3.0 port, one 2.0 port
USB 3.0 A ports: 1
SD slots: None
Video ports: None
Audio ports: Headphone/Microphone combined jack


Conclusion

I'm still not sure which computer I'm going to get. I'm now looking through Linux distributions like Ubuntu and elementary OS to see what compatibility and usability are like. I doubt that 2017 will be the "year of the Linux laptop", but for the first time I'm willing to give it an honest try.

Make no mistake: I think that Apple computers are still gorgeous and a great choice for people who have the budget. I plan to continue recommending MacBooks to family members, friends, acquaintances, and all of the other non-technical people in my life. I think "it just works" is still true for the low-end, and that's ideal for consumers. But consumers have very different needs than professionals.

For a long time, Apple has been a lofty brand, the "insanely great" hardware that people bought because they aspired to "think different". It's looking like that era may be over. Apple may have completed their transition into a mass-market company that makes relatively high quality hardware for normal people. There's nothing wrong with that. But it's probably not for me.


Here's the full list of the computers I considered, in the order I ranked them:

  • HP Spectre x360. Pros: Two Thunderbolt 3 ports. Charge via USB-C. 2-in-1 laptop. Cons: HD Display. Price: $1,299
  • Razer Blade Stealth 4K. Pros: 4K display. One Thunderbolt 3 port. Charge via USB-C. Cons: No USB-C ports besides the one Thunderbolt 3 one. Unnecessary video out. Big bezel. Price: $1,599
  • Razer Blade Stealth QHD. Pros: HiDPI display. One Thunderbolt 3 port. Charge via USB-C. Cons: No USB-C ports besides the one Thunderbolt 3 one. Unnecessary video out. Big bezel. Price: $1,249
  • Apple MacBook Pro 13" with upgrades. Pros: Two Thunderbolt 3 ports. Charge via USB-C. HiDPI display. Good video card. Cons: 6th generation CPU. Expensive. Price: $1,999
  • Apple MacBook Pro 13". Pros: Two Thunderbolt 3 ports. Charge via USB-C. HiDPI display. Good video card. Cons: Underpowered i5 CPU. 6th generation CPU. Expensive. Price: $1,499
  • Dell XPS 13 with upgrades. Pros: One Thunderbolt 3 / USB-C port. Nearly 4K display. Cons: Expensive. Unnecessary SD card slot. No USB-C ports. Proprietary power plug. Price: $1,849
  • Dell XPS 13. Pros: One Thunderbolt 3 / USB-C port. Nearly 4K display. Cons: Underpowered i5 CPU. Unnecessary SD card slot. No USB-C ports. Proprietary power plug. Price: $1,399
  • HP EliteBook Folio G1 Notebook PC. Pros: Charge via USB-C. Two Thunderbolt 3 ports. 4K display. Cons: Expensive. Underpowered 6th-generation M CPU. Max 8GB of RAM. Price: $1,799
  • Lenovo Yoga 910 with upgrades. Pros: Two USB-C ports. Charge via USB-C. 4K display. 2-in-1 laptop. Cons: No Thunderbolt 3 ports. Small battery. Expensive. One of the USB-C ports is a USB 2.0 port. Price: $1,799
  • Lenovo Yoga 910. Pros: Two USB-C ports. Charge via USB-C. 4K display. 2-in-1 laptop. Cons: No Thunderbolt 3 ports. Small battery. One of the USB-C ports is a USB 2.0 port. Only 8GB of RAM. Price: $1,429
  • Apple 12" MacBook. Pros: Charge via USB-C. HiDPI display. Cons: 6th generation CPU. Poor webcam. Only one USB-C port. No Thunderbolt 3 ports. Expensive. Only 8GB of RAM. Price: $1,749
  • Asus ZenBook UX306 13". Pros: USB-C port. Nearly 4K display. Cons: Only one USB-C port. No Thunderbolt 3 ports. Proprietary power plug. Unnecessary video out ports. 6th generation CPU. Price: Goes on sale any day now
  • Acer Swift 7. Pros: Charge via USB-C. Two USB-C ports. Cons: No Thunderbolt 3 ports. HD display. Underpowered i5 CPU. Small battery. Price: $1,099
  • HP Spectre 13. Pros: Two Thunderbolt 3 ports. Charge via USB-C. Cons: HD Display. Only 8GB of RAM available. Small battery. Price: $1,249
  • Asus ZenBook 3 UX390UA. Pros: Charging via USB-C. Very small. Cons: HD display. Only one USB-C port. No Thunderbolt 3 ports. Expensive. Small battery. Price: $1,599
  • Asus ZenBook Flip UX360CA. Pros: One USB-C port. 2-in-1 laptop. Cons: Underpowered m3 CPU. 6th generation CPU. HD Display. No Thunderbolt 3 ports. Proprietary power plug. Unnecessary SD slot. Unnecessary video out port. Price: $749
  • Microsoft Surface Book. Pros: Nearly 4K display. Surface pen included. 2-in-1 laptop. Cons: No USB-C ports. No Thunderbolt 3 ports. Unnecessary video ports. Unnecessary SD slot. Expensive. Underpowered i5 CPU. 6th generation CPU. Price: $1,499
  • Lenovo ThinkPad X1 Carbon 4th Generation 14". Pros: HiDPI display. Cons: No USB-C ports. No Thunderbolt 3 ports. Too many video out ports. Unnecessary SD slot. 6th generation CPU. Price: $1,548
  • Samsung Notebook 9 spin. Pros: Nearly 4K display. 2-in-1 laptop. Cons: Unnecessary SD slot. Unnecessary video out port. No USB-C ports. No Thunderbolt 3 ports. Small battery. 6th generation CPU. 8GB maximum RAM. Price: $1,199

28 October 2016

Lamenting "progress"

The new MacBook Pros were released. Many people I respect have already expressed what I'm feeling. Here's a small sample:

"I waited so long for new macbooks and now I feel like I don't want one :("Armin Ronacher

"<sigh> I guess I will keep the duct tape on my 2013 MBP a bit longer. The bag of dongles was not what this road warrior was looking for."Werner Vogels

"Hi, @microsoft? Listen, I know we haven't talked for a while, and I said some... things, but... Do you want to get a coffee?"Mark Nottingham (in reference to the Surface Studio)


My current personal machine is a 4+ year old 13" MacBook Air. I often hook it into a Thunderbolt Display, Natural Keyboard, and Magic Mouse (the one without the charging wire). The laptop is showing signs of physical wear, the battery has been losing capacity, and it's clear that I'll need a replacement soon.

When I look at the new Apple computers, my choices appear to be limited. The Touch Bar strikes me as a useless gimmick. I never look at my keyboard because I already know how to use it. I philosophically disagree with the idea of looking at your keyboard to comprehend its interface. That's not the purpose of an input device. You don't look at your mouse before you click, do you? Even if the programs I use the most (Sublime, Terminal, Chrome) integrated with the Touch Bar, I can't foresee how that would benefit me. I can only imagine that the Touch Bar's flicker during program changes would be an annoying distraction.

That means the only two Apple machines I'd consider buying are the 12" MacBook (with one USB-C port) and the 13" MacBook Pro (with the escape key and two Thunderbolt ports). The CPU, RAM, and graphics specs of these machines are essentially the same as my 2012 MacBook Air. The price is the same or higher. The ports they have provide far fewer options. The only significant improvement is that the screens are high-DPI displays. I'm disappointed.

I've been using Apple computers for nearly 30 years. I played Lunar Lander on a Mac Plus. I wrote my first Logo program on a IIgs. I dialed my first modem on a IIsi. I edited my first video on a 7100. I built my first webpage on a Performa. I wrote my first Rhapsody app on an 8600. I earned my degree on a G5. For the past 11 years as a professional programmer, I've written code on a Mac. I wrote my book on a Mac.

And now I feel that this chapter has come to an end.

For the first time, I am seriously considering not buying an Apple computer as my next machine. I can't believe it.

01 October 2016

Despite the reviews, Lo and Behold by Werner Herzog was bad: clueless questions, no narrative. He wasted his interviewees' time and mine.

30 August 2016

How you learn to become a better programmer

I received a nice email last week from a professor at a major university, who asked:

I'm wondering if you can share with me some advice on how to train inexperienced graduate students to be productive in writing quality code in a short period of time.

First: I'm flattered that this person thought I have any advice worth giving! My second thought was: It's impossible! But after thinking it over, I came up with one suggestion. In hopes that someone else finds my reply useful, here's what I wrote in return (with a bit more editing):


I think the best thing for improving code quality is to require your students to write a corresponding set of unittests along with any code they write. Python has a wonderful unit testing module that's built in, as well as a good mocking library. I would encourage your students to also run a test coverage tool periodically to understand what parts of their code aren't tested.
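To make that concrete, here's a minimal sketch of the kind of test I have in mind, using the standard library's unittest module and its bundled unittest.mock; the function under test is invented for the example. Running a coverage tool such as coverage.py over a suite like this then shows which lines were never exercised.

```python
# Minimal sketch: a unittest with a mock for an invented function that
# fetches a URL and counts the words in the response body.
import unittest
from unittest import mock


def count_words(fetch, url):
    # The fetch function is a parameter so tests can pass in a fake.
    body = fetch(url)
    return len(body.split())


class CountWordsTest(unittest.TestCase):

    def test_counts_words_in_fetched_body(self):
        fake_fetch = mock.Mock(return_value='the quick brown fox')
        self.assertEqual(4, count_words(fake_fetch, 'http://example.com'))
        fake_fetch.assert_called_once_with('http://example.com')

    def test_empty_body(self):
        fake_fetch = mock.Mock(return_value='')
        self.assertEqual(0, count_words(fake_fetch, 'http://example.com'))


if __name__ == '__main__':
    unittest.main()
```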

Unittests are like an insurance policy for change. They're especially important for languages like Python that have no other way to verify correctness at even a superficial level. When your code has unittests, it becomes a lot easier to modify and expand a program over time while preserving confidence in the functionality you've already built.

Even when I write code in my free time for fun or personal projects — here's a recent example — I still write tests because they help me build functionality faster overall. The time it takes to write tests is far less than the time it takes to fix all of the bugs introduced unknowingly when you don't have tests. It's worth acknowledging that testing may feel like a waste of effort sometimes, but it's important for your students to understand that taking the time to write tests will be more efficient overall for any program that's non-trivial.

My last piece of advice is this: Sometimes people say that something was too hard to test, so they just skipped writing a test for it. This is exactly the wrong conclusion. If your code is too hard to test, that means your code is bad. The solution to every problem — no matter how hard the problem is — can be easy to test. If your program is not easy to test, then your code needs to be refactored or rewritten to make it easy. Doing that is how you learn to become a better programmer.
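Here's an invented sketch of what that kind of refactoring can look like: code that reaches directly for the system clock is awkward to test, but passing the current time in as a parameter makes the same logic trivial to verify.

```python
import datetime


# Hard to test: the result depends on when the test happens to run.
def is_expired_hard_to_test(deadline):
    return datetime.datetime.utcnow() > deadline


# Easy to test: the caller supplies "now", so a test can pin both sides
# of the comparison to fixed values.
def is_expired(deadline, now):
    return now > deadline


# Example check a unittest could make:
# is_expired(datetime.datetime(2016, 1, 1), now=datetime.datetime(2016, 6, 1))
# returns True.
```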

27 August 2016

Link roundup #7

Zero-cost futures in Rust:
Standardizing how futures / deferreds work in a language is a good idea. Python did something similar (and beyond) with asyncio and PEP 3156. JavaScript / ECMAScript 6 also defined Promises. I'm happy to see Rust do this early. I think there are some details that will make this tricky in practice since Rust doesn't have GC, so we'll see.

Google’s QUIC protocol: moving the web from TCP to UDP:
My skills are officially obsolete. I know HTTP 1.1 and TCP pretty well. I really need to understand the details of HTTP 2.0 and QUIC beyond the high-level architecture. I don't want to become a dinosaur who only knows UUCP or XNS. I've often wondered what it feels like to be an old, but still working programmer. This is probably part of it.

Working remotely:
This is a wonderful guide on how to be a thoughtful collaborator. Except for the "Before you get hired" section, almost all of the advice applies to non-remote (local?) working as well.

PyFlux:
Really cool library for working with time series in Python. See this Jupyter notebook for some compelling examples. I'm happy to see it works with Python 3 and is built on NumPy, SciPy, and Pandas.

What’s New in C# 7.0:
To my surprise, I enjoyed using C# last year after ignoring it for years. It appears that the language is getting even more features with the next release. I don't think that's a good thing; some of what they're introducing seems overly complicated (e.g., out variable declarations).

06 August 2016

Link roundup #6

"With two inputs, a neuron can classify the data points in two-dimensional space into two kinds with a straight line. If you have three inputs, a neuron can classify data points in three-dimensional space into two parts with a flat plane, and so on. This is called 'dividing n-dimensional space with a hyperplane.'"
Understanding neural networks with TensorFlow Playground

"We were looking to make usage of Kafka and Python together just as fast as using Kafka from a JVM language. That’s what led us to develop the pykafka.rdkafka module. This is a Python C extension module that wraps the highly performant librdkafka client library written by Magnus Edenhill."
PyKafka: Fast, Pythonic Kafka, at Last!

"To satisfy this claim, we need to see a complete set of statically checkable rules and a plausible argument that a program adhering to these rules cannot exhibit memory safety bugs. Notably, languages that offer memory safety are not just claiming you can write safe programs in the language, nor that there is a static checker that finds most memory safety bugs; they are claiming that code written in that language (or the safe subset thereof) cannot exhibit memory safety bugs."
"Safe C++ Subset" Is Vapourware

"For each mutant the tool runs the unit test suite; and if that suite fails, the mutant is said to have been killed. That's a good thing. If, on the other hand, a mutant passes the test suite, it is said to have survived. This is a bad thing."
Mutation Testing

"This meant that literally everything was asynchronous: all file and network IO, all message passing, and any “synchronization” activities like rendezvousing with other asynchronous work. The resulting system was highly concurrent, responsive to user input, and scaled like the dickens. But as you can imagine, it also came with some fascinating challenges."
Asynchronous Everything

01 August 2016

I found this gem from 2006 that explains how the Linux kernel interacts with memory barriers. Amazingly thorough!

25 July 2016

Don't hold meetings on Mondays

A couple of months ago I stopped letting people book meetings with me on Mondays before noon. I used to get anxiety on Sunday nights because I'd worry about preparing for my meetings the next day. Now I wake up Monday, have multiple hours to better prepare for the week, and my Sundays are as lazy as they should be.

05 June 2016

Links from PyCon 2016 in Portland

Here are links for things I saw, heard about, or discovered during PyCon 2016 in Portland. These are in totally random order.


I also have a set of links from last year's PyCon as well.

07 May 2016

Link roundup #5

"The new shared memory type, called SharedArrayBuffer, is very similar to the existing ArrayBuffer type; the main difference is that the memory represented by a SharedArrayBuffer can be referenced from multiple agents at the same time. (An agent is either the web page’s main program or one of its web workers.) The sharing is created by transferring the SharedArrayBuffer from one agent to another using postMessage..."
A Taste of JavaScript’s New Parallel Primitives

"Compared to OS-provided locks like pthread_mutex, WTF::Lock is 64 times smaller and up to 180 times faster. Compared to OS-provided condition variables like pthread_cond, WTF::Condition is 64 times smaller. Using WTF::Lock instead of pthread_mutex means that WebKit is 10% faster on JetStream, 5% faster on Speedometer, and 5% faster on our page loading test."
Locking in WebKit

"uvloop makes asyncio fast. In fact, it is at least 2x faster than nodejs, gevent, as well as any other Python asynchronous framework. The performance of uvloop-based asyncio is close to that of Go programs."
uvloop: Blazing fast Python networking

"The bias-variance trade-off appears in a lot of different areas of machine learning. All algorithms can be considered to have a certain degree of flexibility and this is certainly not specific to kNN. The goal of finding the sweet spot of flexibility that describes the patterns in the data well but is still generalizable to new data applies to basically all algorithms."
Misleading modelling: overfitting, cross-validation, and the bias-variance trade-off

"Simple yet flexible JavaScript charting for designers & developers"
Chart.js

10 April 2016

What's awful about being a {software engineer, tech lead, manager}?

I've been building software professionally for over 10 years now. I love what I do and I hope to be an old programmer someday. But along the way, I've encountered many terrible things that have made me hate my job. I wish that someone had given me a roadmap of what to expect earlier in my career, so that when some new and unfortunate awfulness occurred, I wouldn't have felt so alone and frustrated.

This post is meant to be such a guide. I have three goals.

The first goal is to look back: To identify experiences we both may have had in the past. These will help us establish some common ground of understanding. They'll serve as reference points to judge other unfamiliar problems.

The second goal is to look forward: To identify new issues that you may have not experienced yet, but likely could in the future depending on your path. I hope these items will help you prepare for what's coming and decide for yourself what's worth pursuing.

The third goal is to help you empathize and have mutual respect for the difficulties your teammates are facing. You may never endure many of the forward-looking items, especially if you're not a tech lead or manager. Similarly, if you are a tech lead or manager, you may have forgotten what it feels like to be an individual contributor; you may be out of touch with the day-to-day realities. I want to help everyone get on the same page.


The lists in the sections below are not in order of priority. They include observations that other people have told me about; they're not necessarily things I've experienced directly. So if you and I have worked together in the past, please don't assume that a particular example is about you. It's amazing how common these stories are.

It's also important to note that there are other categories of horrible things that this post does not confront at all: racism, sexism, ageism, aggression, and many other factors that contribute to a hostile work environment. I'm not qualified to write about these topics, and they've been described and analyzed thoughtfully elsewhere.


My objective in writing this post is to enumerate what follows from the nature of building software in teams. If you think I missed anything, please let me know. I can imagine that many of these points, especially in the lead and manager lists, also apply to other disciplines. And please keep in mind that these roles aren't all bad; my next post on this subject will be about the good things.


What's awful about being a software engineer?

For an individual contributor who writes code and is directed by a tech lead or manager.

  • There's just too much to learn and not enough time
  • The code is poorly written
  • The current abstractions are bad
  • I would have done this differently
  • The comments don't make any sense, aren't up-to-date
  • No documentation about how something was built or why it works this way
  • The build is slow
  • The tests are slow
  • The tests are flaky
  • There are no tests
  • Bad frameworks that require a lot of boilerplate, complex code, or confusing tests
  • Managers want me to sacrifice code quality for development speed
  • Dependencies change without notice
  • Differences between local dev, testing, and production
  • Getting ratholed on a problem or debugging for a long time
  • Broken or flaky tests that I need to modify but didn't write originally
  • Bugs or production issues that I have to deal with that other people caused, but they aren't actively trying to fix right now
  • Having to maintain someone else's crappy code or systems after they leave
  • Things that aren't automated that should be
  • Getting interrupted constantly by teammates and my manager
  • Context switching costs
  • My manager asks me to work on emergency projects
  • In code reviews my teammates are assholes and it feels like a personal attack
  • Other people are late in delivering the functionality that I need to do my job
  • Other engineers build their features or components too slowly
  • I have to wait for other people a lot
  • There are product decisions that I don't agree with
  • I feel like I'm just getting told what to do
  • No autonomy
  • Nobody respects my opinion
  • I work my ass off and then someone tells me to redo it
  • Product managers change requirements on me because they're overly reactive to criticism or feedback from other people


What's awful about being a tech lead?

For a software engineer who writes code and also leads the design and implementation work of a small group of individual contributors (who are managed by someone else).

  • Everything in production is broken all the time
  • Too many emails or documents to read and respond to
  • Work slips through the cracks
  • Falling behind on everything
  • Other people are making technical design decisions that I don't agree with, but I don't have the ability or authority to convince them to change their minds
  • Implementations that are sloppy or ignore existing best practices
  • Things coming up that I didn't plan for; late feature requirements that break my assumptions
  • I can get really stressed out about deadlines and dependencies, which makes it hard to unwind when I'm home from work
  • Everyone needs more supervision than I expect, no matter how hard I try to explain the details or document the plan
  • Launching something publicly takes forever and is blocked for bullshit non-technical reasons
  • Making the difficult choice between time and quality; deliberately shipping known bugs to production
  • I'm being responsible, why isn't everyone else?
  • I'm falling behind on my responsibilities and nobody is helping me
  • I don't understand what my manager does all day, but I don't think it's useful
  • I don't understand what the product managers do all day, but I don't think it adds value
  • It feels like other engineers on my team are trying to undermine me by not following the plan we already agreed to; I feel like a tattletale when I talk to their managers about it
  • Projects I thought would be my responsibility were taken away from me and given to someone else for reasons I don't understand
  • I don't have enough engineers working on my project to get the work done in a reasonable amount of time
  • People don't listen when I say how hard something will be and they're unwilling to reduce scope


What's awful about being an engineering manager?

For someone who manages a group of software engineers. This person may also be the tech lead, or manage tech leads who direct their reports.

  • It's hard to ask or tell people what to do without feeling like an asshole
  • It feels like everything is an emergency all the time
  • It feels like everyone is always complaining to me all day long
  • I have zero time for email
  • I have zero time for chit chat, even though I feel like an asshole for not being more social
  • When I get home I feel beaten up; sometimes it can be too much; if my significant other or people close to me are having issues they want to talk about, I can be so burnt out by the time I leave work that I'm unable to listen to their problems anymore.
  • At all times, some number of my reports are in one or more of these states:
    • About to quit
    • Upset at someone else on my team
    • Upset at someone else on another team
    • Upset with me
    • Offended by someone for good reason
    • Offended by someone for no good reason
    • Unhappy with the codebase for legitimate reasons
    • Unhappy with the codebase for perfectionist / invalid reasons
    • Unhappy with their project and want to work on something else, even though what they were doing is the most important thing
    • Having personal issues that are affecting their well-being, often causing them to have a negative effect on the morale of those around them
    • Bored; clear they'd take a new job if the right one was offered to them
  • Other managers do work by scheduling meetings. They can't write code; their only way to influence things is to talk. So I'm pulled into a bunch of useless meetings. And they almost always feel like a waste of time.
  • Writing less code sucks; it feels like I'm losing my edge. Sometimes it's hard to see how I'm contributing. I have to change my perspective on what I value. Finding satisfaction in helping others become more productive feels unnatural.
  • I'm going to miss making an important technical decision and things will go terribly wrong
  • A project is going to fall behind or fail because I delegated it to the wrong person
  • It feels like other managers are trying to undermine me with politics
  • My biggest problems are confidential and I can't ask for support or advice from anyone
  • It's unclear what the CTO/VP of engineering does; they don't seem to add any value; they ask ignorant questions and are generally disrespectful
  • Some of my best engineers are wasting their time on projects that don't matter, but I don't want to stop them from doing it in fear that it'll push them away from the team and lead them to quit
  • Everyone disagrees with at least some part of how I'm managing the team


Thanks to Ben Kamens, Rafe Colburn, Katrina Sostek, and Troy Trimble for reviewing the content of this post.

09 April 2016

Link roundup #4

"Anyone who interacts with process has a choice. You can either blindly follow the bulleted lists or you can ask why. They’re going to ignore you the first time you ask, the second time, too. The seventh time you will be labeled a troublemaker and you will run the risk of being uninvited to meetings, but I say keep asking why. Ask in a way that illuminates and doesn’t accuse. Listen hard when they attempt to explain and bumble it a bit because maybe they only know a bit of the origin story."
The Process Myth (2013)

"You can think of this functionality of triggering computation based on database changes as being analogous to the trigger and materialized view functionality built into databases but instead of being limited to a single database and implemented only in PL/SQL, it operates at datacenter scale and can work with any data source."
Introducing Kafka Streams: Stream Processing Made Simple

"Consider being lost in an endless desert. If you see an oasis in the distance, you head towards it even if the water is brackish and has camel dung floating in it. Bernstein et al are the oasis (or perhaps the mirage of an oasis), in an endless desert of cryptosystems and implementations of cryptosystems that keep breaking. So the (pending) Bernstein monoculture isn't necessarily a vote for Dan, it's more a vote against everything else."
On the Impending Crypto Monoculture

"The opportunity cost of email makes a postage stamp look cheap."
How to send and reply to email

"Caravel is a data exploration platform designed to be visual, intuitive and interactive."
Caravel (by Airbnb)

19 March 2016

Link roundup #3

"If you follow the tips in the official porting howto, then you can port your code file by file and simply do it slowly so that the problem at least stops growing for you."
How to pitch Python 3 to management

"In this series of articles, I want to show the way OpenGL works by writing its clone."
How OpenGL works: software renderer in 500 lines of code

"We can change the way people debug software, and in its own way that may be as important as my Web platform work, and it's work I desperately want to do."
Leaving Mozilla (to work on rr)

"Why do people sometimes discuss "the stack" like it's some kind of revered fundamental object?"
What is "the stack"?

"As a result, investors will change their lens from focusing solely on revenues and growth to also look at unit economics and burn rate. Founders will begin to make changes in core operating principles and resource allocation that might impact the lives of hundreds or even thousands of dedicated employees, vendors and customers. And ultimately, stronger companies will result. Don’t get me wrong, evolving from a unicorn into a cockroach will be extremely painful — but just like Mark Watney on Mars, the sooner you realize the situation on the ground has changed, the more time you have to “science the shit out of the problem” and succeed."
First Round Capital's letter to their Limited Partners