Speed of Light
The Future Programming Manifesto  

Jonathan Edwards:

Most of the problems of software arise because it is too complicated for humans to handle. We believe that much of this complexity is unnecessary and indeed self-inflicted. We seek to radically simplify software and programming. […]

We should measure complexity as the cumulative cognitive effort to learn a technology from novice all the way to expert. One simple surrogate measure is the size of the documentation. This approach conflicts with the common tendency to consider only the efficiency of experts. Expert efficiency is hopelessly confounded by training and selection biases, and often justifies making it harder to become an expert. We are skeptical of “expressive power” and “terseness”, which are often code words for making things more mathematical and abstract. Abstraction is a two-edged sword.

It’s a Coup  

Michael Tsai:

The quote in this post’s title, from Andrew Pontious, refers to the general lack of outrage over the loss of dynamism. In broad strokes, the C++ people have asserted their vision that the future will be static, and the reaction from the Objective-C crowd has been apathy. Apple hasn’t even really tried to make a case for why this U-turn is a good idea, and yet the majority seems to have rolled over and accepted it, anyway.

Questions!

Why don’t more people question things? What does it mean to question things? What kinds of things do we need to question? What kinds of answers do we hope to find from those questions? What sort of questions are we capable of answering? How do we answer the rest of the questions? Would it help if more people read books? Why does my generation, self included, insist on not reading books? Why do we insist on watching so much TV? Why do we insist on spending so much time on Twitter or Facebook? Why do I care so much how many likes or favs a picture or a post gets? What does it say about a society driven by that? Why are we so obsessed with amusing ourselves to death? Why are there so many damn photo sharing websites and todo applications? Is anybody even reading this? How do we make the world a better place? What does it mean to make the world a better place? Why do we think technology is the only way to accomplish this? Why are some people against technology? Do these people have good reasons for what they believe? Are we certain our reasons are better? Can we even know that for sure? What does it mean to know something for sure? Do computers cause more problems than they solve? Will the world be a better place if everyone learns to program? If we teach all the homeless people JavaScript, will they stop being homeless? What about the sexists and the racists and the fascists and the homophobes? Who else can help? How do we get all these people to work together? How do we teach them? How can we let people learn in better ways? How can we convince people to let go of their strategies adapted for the past and instead focus on the future? Why are there so many answers to the wrong questions?

Bicycle

The bicycle is a surprisingly versatile metaphor and has been on my mind lately. Here are all the uses of the bicycle as a metaphor in computing that I could think of.

Perhaps the most famous use is by Steve Jobs, who apparently wanted to rename the Macintosh to “Bicycle”. Steve explains why in this video:

I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And, humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So, that didn’t look so good. But, then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And, a man on a bicycle, a human on a bicycle, blew the condor away, completely off the top of the charts.

And that’s what a computer is to me. What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds.

On the other end of the metaphor spectrum, we have Doug Engelbart, who believed powerful tools required powerful training for users to realize their full potential, but that the world is more satisfied with dumbed down tools:

[H]ow do you ever migrate from a tricycle to a bicycle? A bicycle is very unnatural and hard to learn compared to a tricycle, and yet in society it has superseded all the tricycles for people over five years old. So the whole idea of high-performance knowledge work is yet to come up and be in the domain. It’s still the orientation of automating what you used to do instead of moving to a whole new domain in which you are obviously going to learn quite a few new skills.

And again from Belinda Barnet’s Memory Machines:

[Engelbart]: ‘Someone can just get on a tricycle and move around, or they can learn to ride a bicycle and have more options.’

This is Engelbart’s favourite analogy. Augmentation systems must be learnt, which can be difficult; there is resistance to learning new techniques, especially if they require changes to the human system. But the extra mobility we could gain from particular technical objects and techniques makes it worthwhile.

Finally, perhaps my two favourite analogies come from Alan Kay. Much like Engelbart, Alan uses the bicycle as a metaphor for learning:

I think that if somebody invented a bicycle now, they couldn’t get anybody to buy it because it would take more than five minutes to learn, and that is really pathetic.

But Alan has a more positive metaphor for the bicycle (3:50), which gives me some hope:

The great thing about a bike is that it doesn’t wither your physical attributes. It takes everything you’ve got, and it amplifies that! Whereas an automobile puts you in a position where you have to decide to exercise. We’re bad at that because nature never required us to have to decide to exercise. […]

So the idea was to try to make an amplifier, not a prosthetic. Put a prosthetic on a healthy limb and it withers.

Humans Need Not Apply  

Here’s a light topic for the weekend:

This video combines two thoughts to reach an alarming conclusion: “Technology gets better, cheaper, and faster at a rate biology can’t match” + “Economics always wins” = “Automation is inevitable.”

This pairs well with the book I’m currently reading, Nick Bostrom’s Superintelligence, and there’s an interesting discussion on reddit, too.

It’s important to remember that even if this speculation is true and humans in the future are largely unemployable, there are other things for a human to do than just work.

Documentation  

Dr. Drang:

There seems to be belief among software developers nowadays that providing instructions indicates a failure of design. It isn’t. Providing instructions is a recognition that your users have different backgrounds and different ways of thinking. A feature that’s immediately obvious to User A may be puzzling to User B, and not because User B is an idiot.

You may not believe this, but when the Macintosh first came out everything about the user interface had to be explained.

Agreed. Of course you have to have a properly labeled interface, but that doesn’t mean you can’t have more powerful features explained in documentation. The idea that everything should be “intuitive” is highly toxic to creating powerful software.

Jef Raskin on “Intuitive Interfaces”  

Jef Raskin:

My subject was an intelligent, computer-literate, university-trained teacher visiting from Finland who had not seen a mouse or any advertising or literature about it. With the program running, I pointed to the mouse, said it was “a mouse”, and that one used it to operate the program. Her first act was to lift the mouse and move it about in the air. She discovered the ball on the bottom, held the mouse upside down, and proceeded to turn the ball. However, in this position the ball is not riding on the position pick-offs and it does nothing. After shaking it, and making a number of other attempts at finding a way to use it, she gave up and asked me how it worked. She had never seen anything where you moved the whole object rather than some part of it (like the joysticks she had previously used with computers): it was not intuitive. She also did not intuit that the large raised area on top was a button.

But once I pointed out that the cursor moved when the mouse was moved on the desk’s surface and that the raised area on top was a pressable button, she could immediately use the mouse without another word. The directional mapping of the mouse was “intuitive” because in this regard it operated just like joysticks (to say nothing of pencils) with which she was familiar.

From this and other observations, and a reluctance to accept paranormal claims without repeatable demonstrations thereof, it is clear that a user interface feature is “intuitive” insofar as it resembles or is identical to something the user has already learned. In short, “intuitive” in this context is an almost exact synonym of “familiar.”

And

The term “intuitive” is associated with approval when applied to an interface, but this association and the magazines’ rating systems raise the issue of the tension between improvement and familiarity. As an interface designer I am often asked to design a “better” interface to some product. Usually one can be designed such that, in terms of learning time, eventual speed of operation (productivity), decreased error rates, and ease of implementation it is superior to competing or the client’s own products. Even where my proposals are seen as significant improvements, they are often rejected nonetheless on the grounds that they are not intuitive. It is a classic “catch 22.” The client wants something that is significantly superior to the competition. But if superior, it cannot be the same, so it must be different (typically the greater the improvement, the greater the difference). Therefore it cannot be intuitive, that is, familiar. What the client usually wants is an interface with at most marginal differences that, somehow, makes a major improvement. This can be achieved only on the rare occasions where the original interface has some major flaw that is remedied by a minor fix.

Nobody knew how to use an iPhone before they saw someone else do it. There’s nothing wrong with more powerful software that requires a user to learn something.

The Wealth of Applications  

Graham Lee:

Great, so dividing labour must be a good thing, right? That’s why a totally post-Smith industry like producing software has such specialisations as:

full-stack developer

Oh, this argument isn’t going the way I want. I was kindof hoping to show that software development, as much a product of Western economic systems as one could expect to find, was consistent with Western economic thinking on the division of labour. Instead, it looks like generalists are prized.

On market demand:

It’s not that there’s no demand, it’s that the demand is confused. People don’t know what could be demanded, and they don’t know what we’ll give them and whether it’ll meet their demand, and they don’t know even if it does whether it’ll be better or not. This comic strip demonstrates this situation, but tries to support the unreasonable position that the customer is at fault over this.

Just as using a library is a gamble for developers, so is paying for software a gamble for customers. You are hoping that paying for someone to think about the software will cost you less over some amount of time than paying someone to think about the problem that the software is supposed to solve.

But how much thinking is enough? You can’t buy software by the bushel or hogshead. You can buy machines by the ton, but they’re not valued by weight; they’re valued by what they do for you. So, let’s think about that. Where is the value of software? How do I prove that thinking about this is cheaper, or more efficient, than thinking about that? What is efficient thinking, anyway?

I think you can answer this question if you frame most modern software as “entertainment” (or at least, Apps are Websites). It’s certainly not the case that all software is entertainment, but perhaps for the vast majority of people software as they know it is much closer to movies and television than it is to references or mental tools. The only difference is, software has perhaps completed the ultimate wet dream of the entertainment market in that Pop Software doesn’t even really have personalities like music or TV do — the personality is solely that of the brand.

Off and On

I started working on a side project in January 2014, and like many of my side projects over the years, after an initial few months of vigorous work the last little while has been mostly off-and-on progress.

The typical list of explanations applies: work gets in the way (work has been a perpetual crunch mode for months now), the project has reached a big enough size that it’s hard to make changes (I’m on an unfamiliar platform), and I’m stuck at a particularly difficult problem (I saved the best for last!).

Since the summer has been more or less fruitless while working on this project, I’m taking a different approach going forward, one I’ve used to some success in the past. It comes down to three main things:

  1. Focus the work to one hour per day, usually in the morning. This causes me to get at least something done every day, even if it’s just small or infrastructure work. I’ve found limiting myself to a small amount of time (two hours works well too) also forces me not to procrastinate or get distracted while I’m working. The hour of side-project time becomes precious and not something to waste.

  2. Stop working when you’re in the middle of something so you have somewhere to ramp up from the next time you start (I’m pretty sure this one is cribbed directly from Ernest Hemingway).

  3. Keep a diary for your work. I do this with most of my projects by just updating a text file every day, after I’m finished working, with my thoughts for the day. I usually write about what I worked on and what I plan on working on the next day. This complements step 2 because it lets me see where I left off and what I was planning on doing. It also helps bring any subconscious thoughts about the project into the front of my brain. I’ll usually spend the rest of the day thinking about it, and I’ll be eager to get started again the next day (which helps fuel step 1, because I have lots of ideas and want to stay focused on them — it forces me to work better to get them done).

That, and I’ve set a release date for myself, which should hopefully keep me focused, too.

Here goes.

Transliterature, a Humanist Design  

Ted Nelson:

You have been taught to use Microsoft Word and the World Wide Web as if they were some sort of reality dictated by the universe, immutable “technology” requiring submission and obedience.

But technology, here as elsewhere, masks an ocean of possibilities frozen into a few systems of convention.

Inside the software, it’s all completely arbitrary. Such “technologies” as Email, Microsoft Windows and the World Wide Web were designed by people who thought those things were exactly what we needed. So-called “ICTs” – “Information and Communication Technologies,” like these – did not drop from the skies or the brow of Zeus. Pay attention to the man behind the curtain! Today’s electronic documents were explicitly designed according to technical traditions and tekkie mindset. People, not computers, are forcing hierarchy on us, and perhaps other properties you may not want.

Things could be very different.

JUMP Math: Multiplying Potential (PDF)  

John Mighton:

Research in cognitive science suggests that, while it is important to teach to the strengths of the brain (by allowing students to explore and discover concepts on their own), it is also important to take account of the weaknesses of the brain. Our brains are easily overwhelmed by too much new information, we have limited working memories, we need practice to consolidate skills and concepts, and we learn bigger concepts by first mastering smaller component concepts and skills.

Teachers are often criticized for low test scores and failing schools, but I believe that they are not primarily to blame for these problems. For decades teachers have been required to use textbooks and teaching materials that have not been evaluated in rigorous studies. As well, they have been encouraged to follow many practices that cognitive scientists have now shown are counterproductive. For example, teachers will often select textbooks that are dense with illustrations or concrete materials that have appealing features because they think these materials will make math more relevant or interesting to students. But psychologists such as Jennifer Kaminski have shown that the extraneous information and details in these teaching tools can actually impede learning.

(via @mayli)

Idle Creativity  

Andy Matuschak (in 2009):

When work piles up, my brain doesn’t have any idle cycles. It jumps directly from one task to another, so there’s no background processing. No creativity! And it feels like all the color and life has been sucked out of the world.

I don’t mind being stressed or doing lots of work or losing sleep, but I’ve been noticing that I’m a boring person when it happens!

The Colour and the Shape: Custom Fonts in System UI  

Dave Wiskus:

When the user is in your app, you own the screen.

…Except for the status bar — that’s Helvetica Neue. And share sheets. And Alerts. And in action sheets. Oh, and in the swipe-the-cell UI in iOS 8. In fact any stock UI with text baked in is pretty much going to use Helvetica Neue in red, black, and blue. Hope you like it.

Maybe this is about consistency of experience. Perhaps Apple thinks that people with bad taste will use an unreadable custom font in a UIAlert and confuse users.

I agree with Dave that the lack of total control is vexing, but I think that’s because Apple wants us to treat these system features (alert views and the status bar) more or less like hardware. They’re immutable; they “come from the OS” as if appearing on Official iOS Letterhead paper.

This is why I think Apple doesn’t want us customizing these aspects of iOS. They want to keep the “official” bits as untampered with as possible.

Apps are Websites

The Apple developer community is atwitter this week about independent developers and whether or not they can earn a good living working independently on the Mac and/or iOS platforms. It’s a great discussion about an unfortunately bleak topic. It’s sad to hear that so many great developers, working on so many great products, are doing so poorly from it. And it seems like a lot of it is mostly out of their control (if I thought I knew a better way, I’d be doing it!). David Smith summarizes most of the discussion (with an awesome list of links):

It has never been easy to make a living (whatever that might mean to you) in the App Store. When the Store was young it may have been somewhat more straightforward to try something and see if it would hit. But it was never “easy”. Most of my failed apps were launched in the first 3 years of the Store. As the Store has matured it has also become a much more efficient marketplace (in the economics sense of market). The little tips and tricks that I used to be able to use to gain an ‘unfair’ advantage now are few and far between.

The basic gist seems to be “it’s nearly impossible to make a living off iOS apps, and it’s possible but still pretty hard to do off OS X.” Most of us, I think, would tend to agree that you can charge more for OS X software than you can for iOS because OS X apps are usually “bigger” or more fleshed out, but I think that’s only half the story.

The real reason why it’s so hard to sell iOS apps is that iOS apps are really just websites. Implementation details aside, 95 per cent of people think of iOS apps the same way they think about websites. Websites that most people are exposed to are mostly promotional, ad-laden and most importantly, free. Most people do not pay for websites. A website is just something you visit and use, but it isn’t a piece of software, and this is the exact same way they think of and treat iOS apps. That’s why indie developers are having such a hard time making money.

(Just so we’re clear, I’ve been making iOS apps for the whole duration of the App Store and I know damn well iOS apps are not “websites.” I’m well aware they are contained binaries that may or may not use the internet or web services. I’m talking purely about perceptions here.)

For a simple test, ask any of your non-developer friends what the difference between an “app” and an “application” or “program” is and I’d be willing to bet they think of them as distinct concepts. To most people, “apps” are only on your phone or tablet, and programs are bigger and on your computer. “Apps” seem to be a wholly different category of software from programs like Word or Photoshop, and the idea that Mac and iOS apps are basically the same on the inside doesn’t really occur to people (nor does it need to, really). People “know” apps aren’t the same thing as programs.

Apps aren’t really “used” so much as they are “checked” (how often do people “check Twitter” vs. “use Twitter”?), and a check is usually a brief “visit” measured in seconds (of, ugh, “engagement”). Most apps are used briefly and fleetingly, just like most websites. iOS, then, isn’t so much an operating system as a browser, with the App Store as its crappy search engine. Your app is one of limitless other apps, just like your website is one of limitless other websites. The ones people have heard of are the ones that are promoted and advertised, or the ones in their own niches.

I don’t know how to make money in the App Store, but if I had to I’d try to learn from financially successful websites. I’d charge a subscription and I’d provide value. I’d make an app that did something other than have a “feed” or a “stream” or “shared moments.” I’d make an app that helps people create or understand. I’d try new things.

I couldn’t charge $50 for an “app” because apps are perceived as not having that kind of value, a perception I have to agree with (I know firsthand how much work goes into making an app, but that doesn’t make the app valuable), so maybe we need to create a new category of software on iOS, one that breaks out of the “app” shell (and maybe breaks out of the moniker, too). I don’t know what that entails, but I’m pretty sure that’s what we need.

More Inspiration from Magic Ink and Cortex  

Because I’ll never stop linking to Magic Ink:

The future will be context-sensitive. The future will not be interactive.

Are we preparing for this future? I look around, and see a generation of bright, inventive designers wasting their lives shoehorning obsolete interaction models onto crippled, impotent platforms. I see a generation of engineers wasting their lives mastering the carelessly-designed nuances of these dead-end platforms, and carelessly adding more. I see a generation of users wasting their lives pointing, clicking, dragging, typing, as gigahertz processors spin idly and gigabyte memories remember nothing. I see machines, machines, machines.

Whither Cortex

In my NSNorth 2013 talk, An Educated Guess (which was recorded on video but has not yet been published), I gave a demonstration of a programming tool called Cortex and made the rookie mistake of saying it would be on Github “soon.” Seeing as July 2014 is well past “soon,” I thought I’d explain a bit about Cortex and what has happened since that first demonstration.

Cortex is a framework and environment for application programs to autonomously exchange objects without having to know about each other. This means a Calendar application can ask the Cortex system for objects with a Calendar type and receive a collection of objects with dates. These Calendar objects come from other Cortex-aware applications on the system, like a Movies app, or a restaurant webpage, or a meeting scheduler. The Calendar application knows absolutely nothing about these other applications; all it knows is that it wants Calendar objects.

Cortex can be thought of a little bit like Copy and Paste. With Copy and Paste, the user explicitly copies a selected object (like a selection of text, or an image from a drawing application) and then explicitly pastes what they’ve copied into another application (like an email application). In between the copy and paste is the Pasteboard. Cortex is a lot like the Pasteboard, except the user doesn’t explicitly copy or paste anything. Applications themselves either submit objects or request objects.

This, of course, results in quite a lot of objects in the system, so Cortex also has a method of weighting the objects by relevance so nobody is overwhelmed. Applications can also provide their own ways of massaging the objects in the system to create new objects (for example, a “Romantic Date” plugin might look up objects of the Movie Showing type and the Restaurant type, and return objects of the Romantic Date type to inquiring applications).
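
To make that concrete, here is a minimal sketch, in Swift and in a single process, of the kind of API the description above implies. The names and types are placeholders I’m inventing for illustration; they are not Cortex’s actual interface.

    import Foundation

    // Placeholder types for illustration only; not Cortex's real API.

    // An object in the system: a type name plus an arbitrary payload, stamped
    // with the time it was submitted so relevance can favour recency.
    struct ExchangeObject {
        let type: String            // e.g. "Calendar", "Restaurant", "Movie Showing"
        let payload: [String: Any]  // e.g. ["title": "Dinner", "date": someDate]
        let submittedAt: Date

        init(type: String, payload: [String: Any], submittedAt: Date = Date()) {
            self.type = type
            self.payload = payload
            self.submittedAt = submittedAt
        }
    }

    // The central store applications talk to instead of talking to each other.
    final class ExchangeStore {
        private var objects: [ExchangeObject] = []

        // A Movies app or a restaurant page submits objects it thinks are useful.
        func submit(_ object: ExchangeObject) {
            objects.append(object)
        }

        // A Calendar app asks for every "Calendar" object in the system without
        // knowing which applications produced them. Results are ordered by a
        // naive relevance measure: newest first.
        func request(type: String) -> [ExchangeObject] {
            return objects
                .filter { $0.type == type }
                .sorted { $0.submittedAt > $1.submittedAt }
        }
    }

    // Usage: a Movies app leaves a showing behind; a Calendar app picks it up later.
    let store = ExchangeStore()
    store.submit(ExchangeObject(type: "Calendar",
                                payload: ["title": "Movie showing",
                                          "date": Date(timeIntervalSinceNow: 3600)]))
    let upcoming = store.request(type: "Calendar")

A real implementation would cross process boundaries (over XPC, or the HTTP-based Cortex protocol mentioned below) rather than live in one process, but the shape of the exchange is the same.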

If this sounds familiar, it’s because it was largely inspired by part of a Bret Victor paper, with lots of other bits mixed in from my research for the NSNorth talk (especially Bush’s Memex and Engelbart’s NLS).

This is the sort of system I’ve alluded to before on my website (for example, in Two Apps at the Same Time and The Programming Environment is the Programming Language, among others).

Although the system I demonstrated at NSNorth was largely a technical demo, it was nonetheless pretty fully featured and, to my delight, exceptionally well received by those in the audience. For the rest of the conference, I was approached by many excited developers eager to jump in and get their feet wet. Even those who were skeptical were at least willing to acknowledge that, despite its shortcomings, the basic premise of applications sharing data autonomously is a worthwhile endeavour.

And so here I am over a year later with Cortex still locked away in a private repository. I wish I could say I’ve been working on it all this time and it’s ten times more amazing than what I’d originally demoed but that’s just not true. Cortex is largely unchanged and untouched since its original showing last year.

On the plane ride home from NSNorth, I wrote out a to-do list of what needed to be done before I’d feel comfortable releasing Cortex into the wild:

  1. Writing a plugin architecture. The current plan is to have the plugins be normal Cocoa plugins run by an XPC process. That way, if they crash they won’t bring down the main part of the system. This will mean the generation of objects is done asynchronously, so care will have to be taken here.

  2. A story for debugging Cortex plugins. It’s going to be really tricky to debug these things, and if it’s too hard then people just aren’t going to develop them. So it has to be easy to visualize what’s going on and easy to modify them. This might mean not using straight compiled bundles but instead using something more dynamic. I have to evaluate what that would mean for people distributing their own plugins, if this means they’d always have to be released in source form.

  3. How are Cortex plugins installed? The current thought is to allow for an install message to be sent over the normal Cortex protocol (currently HTTP) and either send the plugin that way (from a hosting application) or have Cortex itself download and install the plugin from the web.

    How would it handle uninstalls? How would it handle malicious plugins? It seems like the user is going to have to grant permission for these things.

  4. Relatedly, should there be a permissions system governing which apps can get/submit which objects from the system? Maybe we just want “read and/or write” permissions per application.

The most important issue, both then and today, is #2. How are you going to make a Cortex component (something that can create or massage objects) without losing your mind? Applications are hard to make, but they’re even harder to make when we can’t see our data. Since Cortex revolves around data, in order to make anything useful with it, programmers need to be able to see that data. Programmers are smart, but we’re also great at coping with things, with juuuust squeaking by with the smallest amount of functionality. A programmer will build-run-kill-change-repeat an application a thousand times before stopping and taking the time to write a tool to help them visualize what’s going on.

I do not want to promote this kind of development style with Cortex, and until I can find a good solution (or be convinced otherwise) I don’t think Cortex would do anything but languish in public. If this sounds like an interesting problem to you, please do get in touch.

Wil Shipley on Automated Testing  

Classic Shipley:

But, seriously, unit testing is teh suck. System testing is teh suck. Structured testing in general is, let’s sing it together, TEH SUCK.

“What?!!” you may ask, incredulously, even though you’re reading this on an LCD screen and it can’t possibly respond to you? “How can I possibly ship a bug-free program and thus make enough money to feed my tribe if I don’t test my shiznit?”

The answer is, you can’t. You should test. Test and test and test. But I’ve NEVER, EVER seen a structured test program that a) didn’t take like 100 man-hours of setup time, b) didn’t suck down a ton of engineering resources, and c) actually found any particularly relevant bugs. Unit testing is a great way to pay a bunch of engineers to be bored out of their minds and find not much of anything. [I know – one of my first jobs was writing unit test code for Lighthouse Design, for the now-president of Sun Microsystems.] You’d be MUCH, MUCH better offer hiring beta testers (or, better yet, offering bug bounties to the general public).

Let me be blunt: YOU NEED TO TEST YOUR DAMN PROGRAM. Run it. Use it. Try odd things. Whack keys. Add too many items. Paste in a 2MB text file. FIND OUT HOW IT FAILS. I’M YELLING BECAUSE THIS SHIT IS IMPORTANT.

Most programmers don’t know how to test their own stuff, and so when they approach testing they approach it using their programming minds: “Oh, if I just write a program to do the testing for me, it’ll save me tons of time and effort.”

There’s only three major flaws with this: (1) Essentially, to write a program that fully tests your program, you need to encapsulate all of your functionality in the test program, which means you’re writing ALL THE CODE you wrote for the original program plus some more test stuff, (2) YOUR PROGRAM IS NOT GOING TO BE USED BY OTHER PROGRAMS, it’s going to be used by people, and (3) It’s actually provably impossible to test your program with every conceivable type of input programmatically, but if you test by hand you can change the input in ways that you, the programmer, know might be prone to error.

Sing it.

Doomed to Repeat It  

A mostly great article by Paul Ford about the recycling of ideas in our industry:

Did you ever notice, wrote my friend Finn Smith via chat, how often we (meaning programmers) reinvent the same applications? We came up with a quick list: Email, Todo lists, blogging tools, and others. Do you mind if I write this up for Medium?

I think the overall premise is good but I do have thoughts on some of it. First, he claims:

[…] Doug Engelbart’s NLS system of 1968, which pioneered a ton of things—collaborative software, hypertext, the mouse—but deep, deep down was a to-do list manager.

This is a gross misinterpretation of NLS and of Engelbart’s motivations. While the project did birth some “productivity” tools, it was much more a system for collaboration and about Augmenting Human Intellect. A computer scientist not understanding Engelbart’s work would be like a physicist not understanding Isaac Newton’s work.

On to-do lists, I think he gets closest to the real heart of what’s going on (emphasis mine):

The implications of a to-do list are very similar to the implications of software development. A task can be broken into a sequence, each of those items can be executed in turn. Maybe programmers love to do to-do lists because to-do lists are like programs.

I think this is exactly it. This is “the medium is the message” 101. Of course programmers are going to like sequential lists of instructions, it’s what they work in all day long! (Exercise for the reader: what part of a programmer’s job is like email?)

His conclusion is OK but I think misses the bigger cause:

Very little feels as good as organizing all of your latent tasks into a hierarchical lists with checkboxes associated. Doing the work, responding to the emails—these all suck. But organizing it is sweet anticipatory pleasure.

Working is hard, but thinking about working is pretty fun. The result is the software industry.

The real problem is in those very last words: software industry. That’s what we do: we’re an industry, but we pretend to be, or at least expect, a field [of computer science]. Like Alan Kay says, computing isn’t really a field but a pop culture.

It’s not that email is broken or productivity tools all suck; it’s just that culture changes. People make email clients or to-do list apps in the same way that theater companies perform Shakespeare plays in modern dress. “Email” is our Hamlet. “To-do apps” are our Tempest.

Culture changes but mostly grows with the past, whereas pop culture takes almost nothing from the past and instead demands the present. Hamlet survives in our culture by being repeatedly performed, but more importantly it survives because it is studied as a work of art. The word “literacy” doesn’t just mean reading and writing; it also implies having a body of work that a culture includes and studies.

Email and to-do apps aren’t cultural in this sense because they aren’t treated by anyone as “great works,” they aren’t revered or built-upon. They are regurgitated from one generation to the next without actually being studied and improved upon. Is it any wonder mail apps of yesterday look so much like those of today?

Step Away from the Kool-Aid  

Ben Howell on startups and compensation:

Don’t, under any circumstances work for less than market rate in order to build other peoples fortunes. Simply don’t do it. Cool product that excites you so in-turn you’ll work for a fraction of the market rate? Call that crap out for what it is. A CEO of a company asking you to help build his fortune while at the same time returning you squat.

String Constants  

Brent Simmons on string constants:

I know that using a string constant is the accepted best practice. And yet it still bugs me a little bit, since it’s an extra level of indirection when I’m writing and reading code. It’s harder to validate correctness when I have to look up each value — it’s easier when I can see with my eyes that the strings are correct.[…]

But I’m exceptional at spotting typos. And I almost never have cause to change the value of a key. (And if I did, it’s not like it’s difficult. Project search works.)

I’m not going to judge Brent here on his solution, but it seems to me this problem would be much better solved by using string constants, provided Xcode actually showed you the damn values of those constants in auto-complete.

When developers resort to crappy hacks like this, it’s a sign of a deficiency in the tools. If you find yourself doing something like this, you shouldn’t resort to tricks; you should say “I know a computer can do this for me” and you should demand it. (rdar://17668209)
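
For anyone who hasn’t been following along, here’s the pattern in question, sketched in present-day Swift with a made-up key; the point is only to show the trade-off Brent is describing.

    import Foundation

    // The literal approach: the same string is repeated at every call site, and a
    // typo in any one of them fails silently at runtime.
    UserDefaults.standard.set(true, forKey: "showsStatusBar")
    let oops = UserDefaults.standard.bool(forKey: "showStatusBar") // typo, quietly returns false

    // The accepted best practice: one constant, so auto-complete and the compiler
    // catch the mistake. Brent's complaint is that the actual value is now one
    // level of indirection away when you're reading the code.
    let showsStatusBarKey = "showsStatusBar"
    UserDefaults.standard.set(true, forKey: showsStatusBarKey)
    let showsStatusBar = UserDefaults.standard.bool(forKey: showsStatusBarKey)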

Remote Chance

I recently stumbled across an interesting 2004 project called Glancing, whose basic principle is replicating the subtle social cues of personal, IRL office relationships (eye contact, nodding, and so on) for people using computers who aren’t in the same physical location.

The basic gist (as I understand it) is people, when in person, don’t merely start talking to one another but first have an initial conversation through body language. We glance at each other and try to make eye contact before actually speaking, hoping for the glance to be reciprocated. In this way, we can determine whether or not we should even proceed with the conversation at all, or if maybe the other person is occupied. Matt Webb’s Glancing exists as a way to bridge that gap with a computer (read through his slide notes, they’re detailed but aren’t long). You can look up at your screen and see who else has recently “looked up” too.

Remote work is a tricky problem to solve. We do it occasionally at Hopscotch when working from home, and we’re mostly successful at it, but as a friend of mine recently put it, it’s harder to have a sense of play when experimenting with new features. There is an element of collaboration, of jamming together (in the musical sense) that’s lacking when working over a computer.

Maybe there isn’t really a solution to it and we’re all looking at it the wrong way. Telecommuting has been a topic of research and experimentation for decades, and it’s never fully taken off. It’s possible, as Neil Postman suggests in Technopoly, that ours is a generation that can’t think of a solution to a problem outside of technology, and that maybe this kind of collaboration isn’t compatible with technology. I see that as a possibility.

But I also think there’s a remote chance we’re trying to graft on collaboration as an after-the-fact feature to non-collaborative work environments. I work in Xcode and our designer works in Sketch, and when we collaborate, neither of our respective apps are really much involved. Both apps are designed with a single user in mind. Contrast this with Doug Engelbart and SRI’s NLS system, built from the ground up with multi-person collaboration in mind, and you’ll start to see what I mean.

NLS’s collaboration features seem, in today’s world at least, like screen sharing with multiple cursors. But it extends beyond that, because the whole system was designed to support multiple people using it from the get-go.

How do we define play, how do we jam remotely with software?

What Do We Save When We Save the Internet?  

Ian Bogost in a blistering look at today’s internet and Net Neutrality:

“We believe that a free and open Internet can bring about a better world,” write the authors of the Declaration of Internet Freedom. Its supporters rise up to decry the supposedly imminent demise of this Internet thanks to FCC policies poised to damage Network Neutrality, the notion of common carriage applied to data networks.

Its zealots paint digital Guernicas, lamenting any change in communication policy as atrocity. “If we all want to protect universal access to the communications networks that we all depend on to connect with ideas, information, and each other,” write the admins of Reddit in a blog post patriotically entitled Only YOU Can Protect Net Neutrality, “then we must stand up for our rights to connect and communicate.”

[…]

What is the Internet? As Evgeny Morozov argues, it may not exist except as a rhetorical gimmick. But if it does, it’s as much a thing we do as it is an infrastructure through which to do it. And that thing we do that is the Internet, it’s pockmarked with mortal regret:

You boot a browser and it loads the Yahoo! homepage because that’s what it’s done for fifteen years. You blink at it and type a search term into the Google search field in the chrome of the browser window instead.

Sitting in front of the television, you grasp your iPhone tight in your hand instead of your knitting or your whiskey or your rosary or your lover.

The shame of expecting an immediate reply to a text or a Gchat message after just having failed to provide one. The narcissism of urgency.

The pull-snap of a timeline update on a smartphone screen, the spin of its rotary gauge. The feeling of relief at the surge of new data—in Gmail, in Twitter, in Instagram, it doesn’t matter.

The gentle settling of disappointment that follows, like a down duvet sighing into the freshly made bed. This moment is just like the last, and the next.

You close Facebook and then open a new browser tab, in which you immediately navigate back to Facebook without thinking.

The web is a brittle place, corrupted by advertising and tracking (see also “Is the Web Really Free?”). I won’t spoil the ending but I’m at least willing to agree with his conclusion.

Seymour Papert: Situating Constructionism  

Seymour Papert and Idit Harel in an introduction to their book, discussing ways of approaching learning:

But the story I really want to tell is not about test scores. It is not even about the math/Logo class. (3) It is about the art room I used to pass on the way. For a while, I dropped in periodically to watch students working on soap sculptures and mused about ways in which this was not like a math class. In the math class students are generally given little problems which they solve or don’t solve pretty well on the fly. In this particular art class they were all carving soap, but what each students carved came from wherever fancy is bred and the project was not done and dropped but continued for many weeks. It allowed time to think, to dream, to gaze, to get a new idea and try it and drop it or persist, time to talk, to see other people’s work and their reaction to yours–not unlike mathematics as it is for the mathematician, but quite unlike math as it is in junior high school. I remember craving some of the students’ work and learning that their art teacher and their families had first choice. I was struck by an incongruous image of the teacher in a regular math class pining to own the products of his students’ work! An ambition was born: I want junior high school math class to be like that. I didn’t know exactly what “that” meant but I knew I wanted it. I didn’t even know what to call the idea. For a long time it existed in my head as “soap-sculpture math.”

It’s beginning to seem to me like constructionist learning is great, but also that we need many different approaches to learning, like atoms oscillating, so that the harmonics of learning can better emerge.

They were using this high-tech and actively computational material as an expressive medium; the content came from their imaginations as freely as what the others expressed in soap. But where a knife was used to shape the soap, mathematics was used here to shape the behavior of the snake and physics to figure out its structure. Fantasy and science and math were coming together, uneasily still, but pointing a way. LEGO/Logo is limited as a build-an-animal-kit; versions under development in our lab will have little computers to put inside the snake and perhaps linear activators which will be more like muscles in their mode of action. Some members of our group have other ideas: Rather than using a tiny computer, using even tinier logic gates and motors with gears may be fine. Well, we have to explore these routes (4). But what is important is the vision being pursued and the questions being asked. Which approach best melds science and fantasy? Which favors dreams and visions and sets off trains of good scientific and mathematical ideas?

I think the biggest problem still faced by Logo is (like Smalltalk) its success. Logo is highly revered as an educational language, so much so that its methods are generally accepted as “good enough” and not readily challenged. The unfortunate truth is twofold:

  1. In order for Logo to be successful as a general creative medium for learning, there are many other factors that must also be worked on, such as teacher/school acceptance (this is of course no easy feat and no fault of Logo’s designers; it’s just an unfortunate truth. Papert discusses it somewhat in The Children’s Machine).

  2. Logo just hasn’t taken the world by storm. Obviously these things take time, but the implicit assumption seems to be “Logo is done, now the world needs to catch up to it.”

“Good enough” tends to lead us down paths prematurely, when instead we should be pushing further. That’s why most programming languages look like Smalltalk and C. Those languages worked marvelously for their original goals, but they’re far from being the pinnacle of possibility. If Logo were invented today, what could it look like (*future-referencing an ironic project of mine*)?

Computer-aided instruction may seem to refer to method rather than content, but what counts as a change in method depends on what one sees as the essential features of the existing methods. From my perspective, CAI amplifies the rote and authoritarian character that many critics see as manifestations of what is most characteristic of–and most wrong with–traditional school. Computer literacy and CAI, or indeed the use of word-processors, could conceivably set up waves that will change school, but in themselves they constitute very local innovations–fairly described as placing computers in a possibly improved but essentially unchanged school. The presence of computers begins to go beyond first impact when it alters the nature of the learning process; for example, if it shifts the balance between transfer of knowledge to students (whether via book, teacher, or tutorial program is essentially irrelevant) and the production of knowledge by students. It will have really gone beyond it if computers play a part in mediating a change in the criteria that govern what kinds of knowledge are valued in education.

This is perhaps the most damning and troublesome facet of computers for their use in pushing humans forward. Computers are so good at simulating old media that it’s essentially all we do with them. Doing old media is easy, as we don’t have to learn any new skills. We’ve evolved to go with the familiar, but I think it’s time we dip our toes into something a little beyond.

MIT Invents A Shapeshifting Display You Can Reach Through And Touch  

First just let me say the work done by the group is fantastic and a great step towards a dynamic physical medium, much like how graphical displays are dynamic visual media. This is an important problem.

What I find troubling, however, is the notion that this sort of technology should be used to mimic the wrong things:

But what really interests the Tangible Media Group is the transformable UIs of the future. As the world increasingly embraces touch screens, the pullable knobs, twisting dials, and pushable buttons that defined the interfaces of the past have become digital ghosts.

Buttons and knobs! Have we learned nothing from our time with dynamic visuals? Graphical buttons and other “controls” on a computer screen already act like some kind of steampunk interface. We’ve got buttons and sliders and knobs and levers, most of which are not appropriate for computer tasks but which we use because we’re stuck in a mechanical mindset. If we’re lucky enough to be blessed with a dynamic physical interface, why should we similarly squander it?

Hands are super sensitive and super expressive (read John Napier’s book about them and think about how you hold it as you read). They can form powerful or gentle grips and they can switch between them almost instantly. They can manipulate and sense pressure, texture, and temperature. They can write novels and play symphonies and make tacos. Why would we want our dynamic physical medium to focus on anything less?

(via @daveverwer)

I Tell You What I’d Do: Two Apps at the Same Time

Today’s iOS-related rumour is about iOS 8 having some kind of split screen functionality. From 9to5Mac:

In addition to allowing for two iPad apps to be used at the same time, the feature is designed to allow for apps to more easily interact, according to the sources. For example, a user may be able to drag content, such as text, video, or images, from one app to another. Apple is said to be developing capabilities for developers to be able to design their apps to interact with each other. This functionality may mean that Apple is finally ready to enable “XPC” support in iOS (or improved inter-app communication), which means that developers could design App Store apps that could share content.

Although I have no sources of my own, I wouldn’t bet against Mark Gurman for having good intel on this. It seems likely that this is real, but I think it might end up being a misunderstanding of problems users are actually trying to solve.

It’s pretty well known that most users have struggled with the “windowed-applications” interface paradigm, where there can be multiple, overlapping windows on screen at once. Many users get lost in the windows and end up devoting more time to managing the windows than to actually getting work done. So iOS is mostly a pretty great step forward in this regard. Having two “windows” of apps open at once would be a step back to the difficulties found on the desktop. And even though the windows on iOS 8 might not overlap, there are still two different apps to multitask with — something else pretty well known to cause strife in people.

Having multiple windows seems like a kind of “faster horse,” a way to just repurpose the “old way” of doing something instead of trying to actually solve the problem users are having. In this case, the whole impetus for showing multiple windows or “dragging and dropping between apps” is to share information between applications.

Users writing an email might want details from a website, map, or restaurant app. Users trying to IM somebody might want to share something they’ve just seen or made in another app. Writers might want to refer to links or page contents from a Wikipedia app. These sorts of problems can all be solved by juxtaposing app windows side by side, but to me it seems like a cop-out.

A better solution would be to share the data between applications through some kind of system service. Instead of drag and drop, or copy and paste (both are essentially the same thing), objects are implicitly shared across the system. If you are looking at a restaurant in one app and then switch to a maps app, the map should show the restaurant (along with any other object you’ve recently seen that has a location). When you head to your calendar, it should show potential mealtimes (with the contact you’re emailing with, of course).

This sort of “interaction” requires thinking about the problem a little differently, but it’s advantageous because it ends up skipping most of the interaction users actually have to do in the first place. Users don’t need to drag and drop, they don’t need to copy and paste, and they don’t need to manage windows. They don’t need to be overloaded by seeing too many apps on screen at once.

I’ve previously talked about this, and my work on this problem is largely inspired by a section in Magic Ink. It’s sort of a “take an object; leave an object” kind of system, where applications can send objects to the system service, and others can request objects from the system (and of course, applications can provide feedback as to which objects should be shown and which should be ignored).
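
As a rough sketch of what that could look like (the names here are invented; this isn’t a real iOS API): the restaurant app leaves an object behind, and the maps app later takes whatever recently seen objects happen to have a location, with neither app knowing about the other.

    import CoreLocation

    // Invented names for illustration; not a real iOS API.
    struct SharedObject {
        let kind: String                         // e.g. "Restaurant"
        let title: String
        let coordinate: CLLocationCoordinate2D?  // present if the object has a location
    }

    final class ObjectService {
        static let shared = ObjectService()
        private var recent: [SharedObject] = []

        // Called by the restaurant app while the user is looking at a listing.
        func leave(_ object: SharedObject) {
            recent.append(object)
        }

        // Called by the maps app when it comes to the foreground: anything recently
        // seen that has a location gets pinned, with no drag and drop or copy and
        // paste involved.
        func takeObjectsWithLocation() -> [SharedObject] {
            return recent.filter { $0.coordinate != nil }
        }
    }

    ObjectService.shared.leave(SharedObject(kind: "Restaurant",
                                            title: "Joe's Tacos",
                                            coordinate: CLLocationCoordinate2D(latitude: 43.65,
                                                                               longitude: -79.38)))
    let pins = ObjectService.shared.takeObjectsWithLocation()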

I don’t expect Apple to do this in iOS 8, but I do hope somebody will consider it.

Legible Mathematics  

Absolutely stunning and thought-provoking essay on a new interface for math as a method of experimenting with new interfaces for programming.

“Amusing Ourselves to Death”  

While I’m telling you what to do, I think everyone should read Neil Postman’s “Amusing Ourselves to Death.” From Wikipedia:

The essential premise of the book, which Postman extends to the rest of his argument(s), is that “form excludes the content,” that is, a particular medium can only sustain a particular level of ideas. Thus Rational argument, integral to print typography, is militated against by the medium of television for the aforesaid reason. Owing to this shortcoming, politics and religion are diluted, and “news of the day” becomes a packaged commodity. Television de-emphasises the quality of information in favour of satisfying the far-reaching needs of entertainment, by which information is encumbered and to which it is subordinate.

America was formed as, and made possible by, a literate society, a society of readers, when presidential debates took five hours. But television (and other electronic media) erode many of the modes in which we (i.e., the world, not just America) think.

If you work in media (and software developers, software is very much a medium) then you have a responsibility to read and understand this book. Your local library should have a copy, too.

The Shallows by Nicholas Carr  

Related to the previous post, I recently read Nicholas Carr’s “The Shallows” and I can’t recommend it enough. From the publisher:

As we enjoy the Net’s bounties, are we sacrificing our ability to read and think deeply?

Now, Carr expands his argument into the most compelling exploration of the Internet’s intellectual and cultural consequences yet published. As he describes how human thought has been shaped through the centuries by “tools of the mind”—from the alphabet to maps, to the printing press, the clock, and the computer—Carr interweaves a fascinating account of recent discoveries in neuroscience by such pioneers as Michael Merzenich and Eric Kandel. Our brains, the historical and scientific evidence reveals, change in response to our experiences. The technologies we use to find, store, and share information can literally reroute our neural pathways.

It’s a well-researched book about how computers — and the internet in general — physically alter our brains and cause us to think differently. In this case, we think more shallowly because we’re continuously zipping around links and websites, and we can’t focus as well as we could when we were a more literate society. Deep reading goes out the browser window, as it were.

You should read it.

A Sheer Torment of Links  

Riccardo Mori:

In other words, people don’t seem to stay or at least willing to explore more when they arrive on a blog they probably never saw before. I’m surprised, and not because I’m so vain to think I’m that charismatic as to retain 90% of new visitors, but by the general lack of curiosity. I can understand that not all the people who followed MacStories’ link to my site had to like it or agree with me. What I don’t understand is the behaviour of who liked what they saw. Why not return, why not decide to keep an eye on my site?

I’ve thought a lot about this sort of thing basically the whole time I’ve been running Speed of Light (just over four years now, FYI), and although I don’t consider myself to be any kind of great writer, I’ve always been a little surprised by the lack of traffic the site gets, even after having some articles linked from major publications.

On any given day, a typical reader of my site will probably see a ton of links from Twitter, Facebook, an RSS feed, or a link site they read. Even if the content on any of those websites is amazing, a reader probably isn’t going to spend too much time hanging around, because there are forty or fifty other links for them to see today.

This is why nobody sticks around. This is why readers bounce. It’s why we have shorter, more superficial articles instead of deep essays. It’s why we have tl;dr. The torrent of links becomes a torment of links because we won’t and can’t stay on one thing for too long.

And it also poses moral issues for writers (or for me, at least). I know there’s a deluge, and every single thing I publish on this website contributes to it. But the catch is that the way to get more avid readers these days is to publish copiously. The more you publish, the more people read, the more links you get, the more people become subscribers. What are we to do?

I don’t have a huge number of readers, but those who do read the site I respect tremendously. I’d rather have fewer, but more thoughtful readers who really care about what I write, than more readers who visit because I post frequent-but-lower-quality articles. I’d rather write long form, well-researched, thoughtful essays than entertaining posts. I know most won’t sit through more than three paragraphs but those aren’t the readers I’m after, anyway.