Microdecisions and Typing

I remember learning to touch type when I was in high school. It was really useful – I could look at the screen instead of my keyboard when I typed, which let me monitor what was going on.

Specifically, I learned to touch type through instant messenger and online games (where you don’t necessarily want to look down for a few seconds).

The problem with learning to touch type in that fashion was twofold:

  1. First, accuracy was less important than speed. This led to a high-ish error rate.
  2. Second, speed came on “familiar” words; I got good at typing common words, and at the keys those words used.

That was good enough through college – getting 50 words per minute was fine for essays, and my error rate was under 10%.

More recently, however, I’ve become aware of how frustrating it is to have a high error rate in typing.

Errors are nasty things. You need to watch closely for them, and fixing them requires you to stop your train of thought, correct the mistake, and then move on. It’s mentally “draining,” particularly for words you’re not really experienced with.

Worse, the less familiar you are with typing and the keyboard, the more you have to focus on articulating what’s in your brain – you need to recall where the keys are, how to spell the word, and so on.

It’s not much – and in most cases, barely noticeable – but it adds friction to the process of getting things out of your head and onto the computer.

If you feel more comfortable with a pen and paper than with a keyboard, that’s probably because of the additional marginal energy you have to exert to push your thoughts through your input device (mouse/keyboard).

And, of course, the problem is more apparent when programming. Brackets, equal signs, and the like – rarely used in English prose – are very, very necessary in programming.

So, a few months ago I endeavored to learn to touch type. The good news: it’s not very difficult, doesn’t take that long, and works pretty well. The bad news: your typing speed drops a lot initially.

TypingWeb

For general purpose “learning to type” I found TypingWeb to be the best. The interface is clean, the lessons helpful.

Typing.io

For special characters, typing.io is excellent. You can type through code from open source projects, and – in the premium version – upload your own code and type through that.

Typing.io also has a really useful grading summary. Unlike so many other typing tools, it counts backspaces and deleted characters to compute an efficiency score. That’s great – an error rate of, say, 2% doesn’t fully capture how much trouble you go to in order to fix the errors. Counting incorrect keystrokes, collateral keystrokes, and backspaces does account for that trouble – and from experience, I’ll say that an error rate of 5% can lead to an overall efficiency of 80% – 85%. That’s not very good.
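To make that distinction concrete, here’s a minimal sketch of how such a score might be computed. Typing.io’s exact formula isn’t documented here, so the model below – where each error drags along a few “collateral” keystrokes that get typed, backspaced, and retyped – is my own assumption, not their implementation.

```python
def typing_efficiency(productive_chars, errors, collateral_per_error=1):
    """Rough efficiency model: productive keystrokes / total keystrokes.

    Assumes each error costs the wrong keystroke itself, plus any
    correct "collateral" characters typed before noticing it, plus
    backspaces over all of them. (The retyped characters end up in
    the final text, so they count as productive.)
    """
    wasted_per_error = 2 * (collateral_per_error + 1)  # typed + backspaced
    total = productive_chars + errors * wasted_per_error
    return productive_chars / total

print(typing_efficiency(100, 5))     # 5% error rate -> ~0.83
print(typing_efficiency(100, 2, 0))  # 2%, caught instantly -> ~0.96
```

Under those assumptions, a 5% error rate lands right in that 80% – 85% efficiency range, even though “95% accurate” sounds pretty good on paper.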

The above lesson was a little easier than most – Python, using variable names I’d typed quite a bit before – but coding feels so much smoother when unproductive keystrokes are under 5% (usually, e.g. for Javascript or PHP, I’m at ~60 WPM with 9% – 12% unproductive keystrokes).

I’m going to keep practicing until I can get to > 75 WPM with < 3% error rate across multiple languages (PHP, Javascript, C, Clojure, Python, etc). At that point I think I won’t have to worry about what to type – it’ll just be seamless – and I’ll have reduced, very slightly, the friction of getting things from inside my head onto the computer.

The Future is Freelancing?

Recently, Shane Snow (CCO of Contently) penned an article called “Half of us May Soon Be Freelancing: 6 Compelling Reasons Why.”

Shane believes that the future of journalism is freelancing and is working to make that happen – Contently was founded to help freelance journalists succeed. Fortunately, he doesn’t adopt the position that all business will become freelance-based – he says, “I don’t believe the majority of businesses will ever become completely freelance or remote (core staff need to be in-house and work in proximity at any company of a certain size; local service-based businesses need people on site, though those can be freelancers).”

Quite right: there are reasons to have people on site, and to have employees on payroll.

To understand that, let me outline a different perspective on freelancing.

My go-to for understanding the formation and structure of businesses is Ronald Coase; specifically, his article “The Nature of the Firm.” The basic premise is that a firm exists where it is cheaper to do transactions within a company than outside of it. As a crude example, if you have a graphic designer in house, you can just ask them to do something; if you go outside the company, you normally have to ask for a quote (which entails generating an RFQ, etc.), absorb additional overhead in billing, accept less commitment on resource allocation, have more difficulty meeting deadlines, and so on.

As a note: “transaction costs” include pretty much everything, from the cost of locating a freelancer and vetting them, to the risk of getting the wrong person, to the cost of communicating.

For a company, hiring an employee full time makes sense as long as you have the work to justify it – the company is essentially negotiating a lower rate by buying in bulk and committing to future purchases. Employees can agree because they decrease their risk (of not having work) and increase their utilization (no more accounts receivable, marketing, lead generation, contract negotiation, etc.).
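As a toy model of that break-even logic – with entirely made-up numbers, not data – you can compare the total cost of sourcing the same work in-house versus from a freelancer:

```python
def total_cost(hourly_rate, hours, fixed_cost, overhead_per_hour=0.0):
    """Cost of sourcing `hours` of work: a one-off transaction cost
    (finding, vetting, contracting) plus the rate and any recurring
    coordination overhead (quotes, billing, communication)."""
    return fixed_cost + hours * (hourly_rate + overhead_per_hour)

# Hypothetical figures: an employee at a lower negotiated rate but a
# large fixed hiring cost, vs. a freelancer at a higher rate with
# smaller fixed cost and per-hour coordination overhead.
for hours in (100, 2000):
    employee = total_cost(50, hours, fixed_cost=20_000)
    freelancer = total_cost(90, hours, fixed_cost=2_000, overhead_per_hour=10)
    print(hours, employee, freelancer)
# At 100 hours the freelancer is cheaper; at 2,000 hours (roughly a
# full-time year), the employee is – Coase's argument in miniature.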

So, there’s a natural place for freelancing: it’s where companies want less-than-FTE work of a certain kind, the transaction costs are sufficiently low, and the freelancer is not risk-averse and/or is in high demand.

Consequently, anything that reduces transaction costs will increase the rate of freelancing (or, if you’re feeling extra fancy, the “natural rate of freelancing”). Examples: online marketplaces that make it easy to establish contracts and monitor work; online portfolios that show past work; mutual ratings (freelancers rating companies and companies rating freelancers); and so on.

The interesting factor is the internet, because the internet effectively expands the market – it removes much of the impact of geographic location. Not completely, because people still prefer in-person contact, but in general you’d expect that preference to be factored into rates or business allocation (so a nearby freelancer will get selected over a remote one, all else equal).

Overall, I don’t think this is a particularly notable change in theory – but it certainly is notable in practice.

How much time in a college degree?

I’ve been going through some university lectures recently (Stanford SEE, iTunes U, and MIT OpenCourseWare) and, of course, I’ve created a spreadsheet to track progress and prioritize.

Currently, I have 13 courses set up in my Excel spreadsheet, for a total of 308 lectures and 319 hours.

I wasn’t sure how to contextualize that number, so I did a rough check on lecture hours during my college years (something, oddly enough, I never did in college).

If you assume 18 credits per semester – where each credit is meant to map to one hour of lecture time per week – and 8 semesters, each averaging 13 weeks, that gives us 1,872 hours of lecture time (the recommended 15 credits per semester works out to 1,560 hours).
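The arithmetic is just credits × weeks × semesters; a quick check:

```python
def lecture_hours(credits_per_semester, semesters=8, weeks_per_semester=13):
    # one credit ~ one hour of lecture per week
    return credits_per_semester * weeks_per_semester * semesters

print(lecture_hours(18))  # 1872
print(lecture_hours(15))  # 1560
```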

That’s pretty inexact – for instance, many of my 4-credit courses at Skidmore had only two 1:20 classes per week, for under 3 hours per week. Others were spot-on (two 1:50 classes per week) and others were more (e.g. with a lab).

If we round that number up and assume 2,000 hours of work – well, for starters, that’s close to the number of working hours in a year (40 hours × 50 weeks). It’s interesting to compare the learning value from one year of work with the learning value of all the classes you attended in college. I understand that (i) it’s not directly comparable (building skills vs. knowledge) and (ii) work at college includes homework (5 hours per week? 10? 20?).

Still, it’s a helpful benchmark in my mind, particularly when moving into an unfamiliar domain.