Speed of Light
Mobile (apparently) Isn’t Killing the Desktop  

Who knows, maybe I jumped the gun:

According to data from comScore, for example, the overall time spent online with desktop devices in the U.S. has remained relatively stable for the past two years. Time spent with mobile devices has grown rapidly in that time, but the numbers suggest mobile use is adding to desktop use, not subtracting from it.

(The article just repeats that paragraph about five times).

About the Futures

Another thing I wanted to mention about thinking about the future: it’s complicated. When it comes to the future, we rarely have one single thing; instead what we have is a bunch of different parts (people, places, things, circumstances, and so on), all relating and working together in a complex way. Through this lens, “the future” really represents the state of a big ol’ system of everything we know. Lovely, isn’t it?

So if you want to “invent the future” you have to understand your actions, inventions, etc. do not exist in a vacuum (unless you work at Dyson), but instead will have an interplay with everything.

And this constrains a lot of what’s possible without really great effort. If I wanted to make my own smartphone, with the intention of making the number one smartphone in the world, I can’t really do that. Today, I can’t make a better smartphone than Apple. There are really only a few major players who can play in this game with a possibility of being the best.

I’m not trying to be defeatist here, but I am trying to point out you can’t just invent the future you want in all cases. The trick is to be aware of the kind of future you want. You can’t win at the smartphone game today. Apple itself couldn’t even be founded today. If you’re trying to compete with things like these then you have to realize you’re not really inventing the future but you’re inventing the present instead.

Be mindful of the systems that exist, or else you’ll be inventing tomorrow’s yesterday.

Setting My Ways

I’ve heard the possibly apocryphal advice that a person’s twenties are very important years in their life because those are the years when you effectively get “set in your ways,” when you develop the habits you’ll be living with for the rest of your life. As I’m a person who’s about to turn 27 whole years old, this has been on my mind lately. I’ve seen lots of people in their older years who are clearly set in their ways or their beliefs, who have no real inclination to change. What works for them will always continue to work for them.

If this is an inevitable part of aging, then I want to set myself in the best ways, with the best habits to carry me forward. Most of these habits will be personal, like taking good care of my health, keeping my body in good shape, and taking good care of my mind. But I think the most important habit for me is to always keep my mind learning, always changing and open to new ideas. That’s the most important habit I think I can develop over the next few years.

Keeping with this theme, I want to keep my mind open with technology as well. Already I’m feeling it’s too easy to say “No, I’m not interested in that topic because how I do things works for me already,” mostly when it comes to technical things (I am loath to use Auto Layout or write software for the Apple Watch). I don’t like to update my operating systems until I can no longer use any apps, because what I use is often less buggy than the newest release.

These habits I’m less concerned about, because newer OS releases and newer APIs in many ways seem like a step backwards to me (although I realize this might just be a sign that I’m already set in my ways!). I’m more concerned about the way I perceive software in a more abstract sense.

To me, “software” has always been something that runs as an app or a website, on a computer with a keyboard and mouse. As a longtime user of and developer for smartphones, I know software runs quite well on these devices too, but it always feels subpar to me. In my mind, it’s hard to do “serious” kinds of work. I know iPhones and iPads can be used for creation and “serious” work, but I also know that doing the tasks typically done on a desktop is much more arduous on a touch screen.

Logically, I know this is a dead end. I know many people are growing up with smartphones as their only computer. I know desktops will seem ancient to them. I know in many countries, desktop computers are almost non-existent. I know there are people writing 3000-word school essays on their phones, and I know these sorts of things will only increase over time. But it defies my common sense.

There are all kinds of ideas foreign to my common sense coming out of mobile software these days. Many popular apps in China exist in some kind of text messaging / chat user interface and messaging in general is changing the way people are interacting with the companies and services providing their software in other places, too. Snapchat is a service I haven’t even tried to understand, but they are rethinking how videos work on a mobile phone, while John Herrman at The Awl describes a potential app-less near-future.

As long as I hold on to my beliefs that software exists as an app or website on a device with a keyboard and mouse, I’m doomed to living in a world left behind.

I’ve seen it happen to people I respect, too. I love the concept of Smalltalk (and I’ll make smalltalk about Smalltalk to anyone who’ll listen) but I can’t help but feel it’s a technological ideal for a world that no longer exists. In some ways, it feels like we’ve missed the boat on using a computer as a powerful means of expression; instead, what we got is a convenient means of entertainment.

My point isn’t really about any particular trend. My point is to remind myself that what “software” is is probably always going to remain in flux, tightly related to things like social change and the whims of the markets. Software evolves and changes over time, but that evolution doesn’t necessarily line up with progress; it’s just different.

Alan Kay said the best way to predict the future is to invent it. But I think you need to understand that the future’s going to be different, first.

Stalking Your Friends with Facebook Messenger  

Great research by Aran Khanna:

As you may know, when you send a message from the Messenger app there is an option to send your location with it. What I realized was that almost every other message in my chats had a location attached to it, so I decided to have some fun with this data. I wrote a Chrome extension for the Facebook Messenger page (https://www.facebook.com/messages/) that scrapes all this location data and plots it on a map. You can get this extension here and play around with it on your message data. […]

This means that if a few people who I am chatting with separately collude and send each other the locations I share with them, they would be able to track me very accurately without me ever knowing.

Even if you know how invasive Facebook is, this still seems shocking. Why?

These sorts of things always seem shocking because we don’t usually see them in the aggregate. Sending your location one message at a time seems OK, but it’s not until you see all the data together that it becomes scary.

We’re used to seeing data moment to moment, as if looking through a pinhole. We’re oblivious to the larger systems at work, and it’s not until we step up the ladder of abstraction that we start to see a bigger picture.

Say what you will of the privacy issues inherent in this discovery, but I think the bigger problem is our collective inability to understand and defend ourselves from these sorts of systems.

The Plot Against Trains  

Adam Gopnik:

What is less apparent, perhaps, is that the will to abandon the public way is not some failure of understanding, or some nearsighted omission by shortsighted politicians. It is part of a coherent ideological project. As I wrote a few years ago, in a piece on the literature of American declinism, “The reason we don’t have beautiful new airports and efficient bullet trains is not that we have inadvertently stumbled upon stumbling blocks; it’s that there are considerable numbers of Americans for whom these things are simply symbols of a feared central government, and who would, when they travel, rather sweat in squalor than surrender the money to build a better terminal.” The ideological rigor of this idea, as absolute in its way as the ancient Soviet conviction that any entering wedge of free enterprise would lead to the destruction of the Soviet state, is as instructive as it is astonishing. And it is part of the folly of American “centrism” not to recognize that the failure to run trains where we need them is made from conviction, not from ignorance.

Yeah, what good has the American government ever done for America?

The Web and HTTP  

John Gruber repeating his argument that because mobile apps usually use HTTP, that’s still the web:

I’ve been making this point for years, but it remains highly controversial. HTML/CSS/JavaScript rendered in a web browser — that part of the web has peaked. Running servers and client apps that speak HTTP(S) — that part of the web continues to grow and thrive.

But I call bullshit. HTTP is not what gives the web its webiness. Sure, it’s a part of the web stack, but so is TCP/IP. The web could have been implemented over any number of protocols and it wouldn’t have made a big difference.

What makes the web the web is the open connections between documents or “apps,” the fact that anybody can participate on a mostly-agreed-upon playing field. Things like Facebook Instant Articles or even Apple’s App Store are closed up, do not allow participation by every person or every idea, and don’t really act like a “web” at all. And they could have easily been built on FTP or somesuch and it wouldn’t make a lick of difference.

It may well be the “browser web” John talks about has peaked, but I think it’s incorrect to say the web is still growing because apps are using HTTP.

The Likely Cause of Addiction Has Been Discovered, and It Is Not What You Think  

Johann Hari:

One of the ways this theory was first established is through rat experiments – ones that were injected into the American psyche in the 1980s, in a famous advert by the Partnership for a Drug-Free America. You may remember it. The experiment is simple. Put a rat in a cage, alone, with two water bottles. One is just water. The other is water laced with heroin or cocaine. Almost every time you run this experiment, the rat will become obsessed with the drugged water, and keep coming back for more and more, until it kills itself.

The advert explains: “Only one drug is so addictive, nine out of ten laboratory rats will use it. And use it. And use it. Until dead. It’s called cocaine. And it can do the same thing to you.”

But in the 1970s, a professor of Psychology in Vancouver called Bruce Alexander noticed something odd about this experiment. The rat is put in the cage all alone. It has nothing to do but take the drugs. What would happen, he wondered, if we tried this differently? So Professor Alexander built Rat Park. It is a lush cage where the rats would have colored balls and the best rat-food and tunnels to scamper down and plenty of friends: everything a rat about town could want. What, Alexander wanted to know, will happen then?

The Best and Worst Places to Grow Up: How Your Area Compares  

New dynamic article by Gregor Aisch, Eric Buth, Matthew Bloch, Amanda Cox and Kevin Quealy for the New York Times about income and geography:

Consider Brooklyn, our best guess for where you might be reading this article. (Feel free to change to another place by selecting a new county on the map or using the search boxes throughout this page.)

The page guesses where you live (at least within America, I don’t know about the rest of the world) and updates its content dynamically to reflect that. The article is what’s best for the reader.

More like this please.

The Eternal Return of BuzzFeed  

Adrienne LaFrance and Robinson Meyer on BuzzFeed and the modern history of media empires:

BuzzFeed is a successful company. And it is not only that: BuzzFeed is the rare example of a news organization that changes the way the news industry works. While it may not turn the largest profits or get the biggest scoops, it is shaping how other organizations sell ads, hire employees, and approach their work. BuzzFeed is the most influential news organization in America today because the Internet is the most influential medium—and, in some crucial ways, BuzzFeed demonstrates an understanding of that medium better than anyone else. […]

Time’s success sprang from a content innovation matched with a keen bet on demography. Its target audience was the average Fitzgerald protagonist, or, at least, his classmate. “No publication has adapted itself to the time which busy men are able to spend on simply keeping informed,” wrote the magazine’s two founders in a manifesto. It was for this audience, too, that the magazine mixed its reports on global affairs with briefs on culture, fashion, business, and politics. The overall feel of a Time issue was a feeling of omniscience: “Now, you, young man of industry, know it all.”

“Just For Girls?” How Gender Divisions Trickle Down  

Chuck Wendig on DC Comics’ new girl superheroes:

All the toxicity between the gender divide? It starts here. It starts when they’re kids. It begins when you say, “LOOK, THERE’S THE GIRL STUFF FOR THE GIRLS OVER THERE, AND THE BOY STUFF FOR THE BOYS OVER HERE.” And then you hand them their pink hairbrushes and blue guns and you tell your sons, “You can’t play with the pink hairbrush because GIRL GERMS yucky ew you’re not weird are you, those germs might make you a girl,” and then when the boy wants to play with the hairbrush anyway, he does and gets his ass kicked on the bus and gets called names like sissy or pussy or some homophobic epithet because parents told their kids that girl stuff is for girls only, which basically makes the boy a girl. And the parents got that lesson from the companies that made the hairbrush because nowhere on the packaging would it ever show a boy brushing hair or a girl brushing a boy’s hair. And on the packaging of that blue gun is boys, boys, boys, grr, men, war, no way would girls touch this stuff. Duh! Girls aren’t boys! No guns for you. […]

Now, this runs the risk of sounding like the plaintive wails of a MAN SPURNED, wherein I weep into the open air, “WHAT ABOUT ME, WHAT ABOUT US POOR MENS,” and that’s not my point, I swear. I don’t want DC or the toy companies to cater to my boy. I just don’t want him excluded from learning about and dealing with girls. I want society to expect him to actually learn about girls and be allowed to like them — not as romantic targets later in life, but as like, awesome ass-kicking complicated equals. As real people who are among him rather than separate from him.

More like this please.

Do Not Disturb

Daring Fireball recently linked to a piece by Steven Levy on “The Age of Notifications,” where Levy describes our current state/spate of notifications and the Apple Watch:

This was delivered to me in the standard message format, no different than a New York Times alert informing me a building two blocks from my apartment has exploded, or an iChat message that my sister is desperately trying to reach me. Please note that I am not a blood relative of B.J. — sorry, Melvin — Upton, nor am I even a fan of the Atlanta Braves. In other words…this could have waited. Nonetheless, MLB.com At Bat apparently deemed this important enough to broadcast to hundreds of thousand of users who had earlier clicked, with hardly a second thought, on a dialogue box asking if they wanted to receive notifications from Major League Baseball. No matter what these users were doing — enduring a meeting, playing basketball, presenting to a book club, daydreaming, watching a movie, enjoying a family meal, painting their masterpiece, proposing marriage, interviewing a job candidate, having sex, or any combination thereof — the news of The Melvin Renaming (the next Robert Ludlum novel?) penetrated their individual radars, urging them to Look at me! Now! Even if they kept the phone stashed, the simple fact that there was an alert burrowed in their brains, keeping them just a little off balance until they finally picked up the phone to discover what the buzz was about.

The Melvin Renaming was just one interruption among billions in what now is unquestionably the Age of Notifications. As our reliance on electronically delivered information has increased, the cascade of brief urgent pointers to that information has been funneled into our devices, lighting our lock screens with these brief dispatches. Rarely does an app neglect to ask you to opt-in to these messages. Most often — since you see the dialogue box when you are entering your honeymoon stage with the app, just after consummation — you say yes. […]

So what’s the solution? We need a great artificial intelligence effort to comb through our information, assess the urgency and relevance, and use a deep knowledge of who we are and what we think is important to deliver the right notifications at the right time. As time goes on, we will trust such a system to effectively filter all our information and dole it out just as needed.

Gruber adds:

I think he’s on to something here: some sort of AI for filtering notification does seem useful. I can imagine helping it by being able to give (a) a thumbs-down to a notification that went through to your watch that you didn’t want to see there; and (b) a thumbs-up to a notification on your phone or PC that wasn’t filtered through to your more personal devices but which you wish had been.

But: this sounds too much like spam filtering to me. True spam is unasked-for. Notifications are all things for which you explicitly opted in, and can opt out of at any moment.

First of all, I think it sounds effectively like spam filtering because these notifications are effectively like spam. Although we technically opt in to them, we’re often coerced into doing so. As Levy said in the quoted passage, we’re often asked at a time when we’re feeling good about the app (after first downloading it, or after accomplishing a task; yes, developers opportunistically pop these up to get more people to agree to them). App developers know when is best to get you to agree, and they know notifications are an effective communication channel for “engaging” (i.e., advertising to) you.

These notifications are kind of like junk food. They’re delicious but dangerous. A little bit is fine, but too much is bad for you. While you can say junk food junkies are “opting in” to eating the unhealthy food, are they really making a choice? Or is the food literally irresistible to them?

Secondly, if this recent interview in Wired is to be believed, a deluge of notifications is one of the primary motivations for the development of the Apple Watch. Am I expected to pay $350+ in order to cut the annoyances of my $600+ iPhone? Wouldn’t it just be simpler to turn off the notifications (i.e., all of them) instead of throwing more technology on the problem?

We shouldn’t have to force (or shame) people into some false sense of virtue (“she’s so extreme, she doesn’t allow any notifications!”) just so they’re not constantly disturbed by buzzes and animating notifications.

Start Thinking About Force Touch in iOS Today

It seems very likely Apple’s Force Touch technology (with its sister Taptic feedback engine) will come to a future iPhone, possibly whichever iPhone launches in the Fall of 2015. Like the recently launched MacBooks, the new iPhone will probably include APIs for your apps to take advantage of.

I’m imploring you to start thinking right now, today, about how you’re going to use these APIs in your applications.

So it goes

When Apple adds a new system-wide API to iOS, here’s how it usually goes: everybody thoughtlessly adds some minor feature to their app using the new API, and the API becomes overused or misused.

Let’s look at Notifications. There are so many apps using notifications that shouldn’t be. Apps notify you about likes and comments. Apps notify you about downloads starting and downloads ending. Apps beg you to come back and use them more. Notifications, which were intended to notify you about important things, have instead become a way for apps to shamelessly advertise themselves at their own whim.

Let’s look at a less nefarious feature: Sharing. Apple introduced its “Sharing” features in iOS 6: a common interface for sharing app content to social networks. This feature is used everywhere. Your browser has it, your social apps have it, your games have it, your programming environments have it.

Another example: let’s look at AirDrop, a feature designed to share data between devices. This feature is used in all kinds of apps it shouldn’t be, like the New York Times app. How many apps have Today extensions? How many badge their icons? How many ask for your location or show a map?

The point of the above examples isn’t to argue the moral validity of their API use, but to show that when these APIs are introduced by Apple, app developers scramble to find ways to use them in their apps, whether or not it really makes sense to do so. Developers may occasionally do so because it’s an important feature for their application, but often it seems they use the APIs because Apple is more likely to promote apps using them, or because the developers just think it’s neato.

This is something I’d like to avoid with the Force Touch APIs.

Force Touch

If we look to Apple for examples of how to use Force Touch in our applications, their usage so far has been pretty tame and uninspired. Most uses on their Force Touch page for the MacBook treat Force Touch as a way of bringing up a contextual menu or view. For the “Force Click” feature, Apple describes things like:

looking up the definition of a word, previewing a file in the Finder, or creating a new Calendar event when you Force click a date in the text of an email.

You can do better in your apps. One way to think about force click is to think of it as an analogy for hovering on desktop computers (if I had my druthers, we’d use today’s “touch” as a hover gesture and we’d use force click as the “tap” or action gesture). Force click and hover are a little different, of course, and it’s your job to pay attention to these differences. Force click is less about skimming and more about confirming (again, my druthers and touch states!). How can your applications more powerfully let people explore and see information?

I wouldn’t look at hover functionality and just literally translate it using force click, but I would look at the kinds of interactions both can afford you. Hover can show tooltips, sure, but it can also be an ambient way to graze information. Look at how one skims an album in iPhoto (RIP) to see its photos at a glance. Look at how hovering over any data point in this visualization highlights related data (the data itself isn’t important, it’s to illustrate a usage of hover).

Pressure sensitivity as an input mechanism is a little more straightforward. You’ll presumably get continuous input in the range of 0 to 1 telling you how hard a finger is pressed and you react accordingly. Apple gives the example of varying pen thickness, but what else can you do? I’d recommend looking to video games for inspiration as they’ve been using this form of analog input for decades. Any game using a joystick or pressable shoulder triggers is a good place to start. Think about continuous things (pan gestures, sure, but also how your whole body moves, how you breathe, how you live) and things with a range (temperature, size, scale, sentiment, and, well, pressure). How can you use these in tandem with the aforementioned “hovering” scenarios?
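As a concrete sketch of that idea: Apple hasn’t published an iOS pressure API yet, so the following is purely an assumption about what one might hand you, namely a normalized force value between 0 and 1 per touch. The function name, the defaults, and the easing curve are my inventions for illustration, not Apple’s API:

```swift
import UIKit

// Sketch only: assume the system reports a normalized pressure
// in 0...1 (an assumption about a future iOS API, modeled on the
// new MacBook trackpads). Map it onto a continuous parameter,
// here a pen stroke width.
func strokeWidth(forNormalizedForce force: CGFloat,
                 minWidth: CGFloat = 1.0,
                 maxWidth: CGFloat = 12.0) -> CGFloat {
    // Clamp in case the hardware reports slightly out-of-range values.
    let f = max(0, min(1, force))
    // An ease-in curve (f squared) gives finer control at light
    // pressures, which tends to feel more natural than a straight
    // linear mapping.
    return minWidth + (maxWidth - minWidth) * f * f
}
```

The interesting design work is in that response curve: linear mappings tend to feel twitchy, and games solve the same problem with dead zones and easing on analog triggers.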

If you want to get a head start on prototyping interactions, you can cheat either by programming on one of the new MacBooks, or by using a new iOS 8 API on UITouch called majorRadius. This gives you an approximation of how “big” a touch was, which you can use as a rough estimate of how hard a finger was pressed (this probably isn’t reliable enough to ship an app with, but it can give you a rough sense of how your interactions could work in a true pressure-sensitive environment).
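A minimal sketch of the majorRadius trick, with the caveat that the calibration constant here is a guess, not an Apple-provided value; majorRadius itself is a real iOS 8 property on UITouch:

```swift
import UIKit

// majorRadius (iOS 8) reports the approximate size of the touch
// in points; a bigger contact patch is a crude proxy for a harder
// press. maxExpectedRadius is a made-up calibration constant;
// you'd tune it by logging real touches on a device.
let maxExpectedRadius: CGFloat = 40.0

func approximatePressure(of touch: UITouch) -> CGFloat {
    let normalized = touch.majorRadius / maxExpectedRadius
    return max(0, min(1, normalized))  // clamp to 0...1
}
```

You might call this from your touch-handling methods and feed the result into whatever continuous parameter you’re prototyping, keeping in mind it’s a stand-in for real pressure data, not a substitute.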

Not every app needs Force Touch, but that probably won’t stop people from abusing it in Twitter and photo sharing apps. If you really care about properly using these new forms of interaction, then start thinking about how to do it right, today. There are decades’ worth of research and papers on this topic. Think about why hands are important. Read, think, design, and prototype. These devices are probably coming sooner than we think, so we should start thinking now about how to set a high bar for future interaction. Don’t relegate this feature to thoughtless context menus; use it as a way to add more discrete and explorable control over the information in your software.

Ai Weiwei is Living in Our Future  

A terrifying essay by Hans de Zwart about the despotic, science-non-fiction world we’re living in.

The algorithms for facial recognition are getting better every day too. In another recent news story we heard how Neil Stammer, a juggler who had been on the run for 14 years, was finally caught. How did they catch him? An agent who was testing a piece of software for detecting passport fraud, decided to try his luck by using the facial recognition module of the software on the FBI’s collection of ‘Wanted’ posters. Neil’s picture matched the passport photo of somebody with a different name. That’s how they found Neil, who had been living as an English teacher in Nepal for many years. Apparently the algorithm has no problems matching a 14 year old picture with a picture taken today. Although it is great that they’ve managed to arrest somebody who is suspected of child abuse, it is worrying that it doesn’t seem like there are any safeguards making sure that a random American agent can’t use the database of pictures of suspects to test a piece of software.

Knowing this, it should come as no surprise that we have learned from the Snowden leaks that the National Security Agency (NSA) stores pictures at a massive scale and tries to find faces inside of them. Their ‘Wellspring’ program checks emails and other pieces of communication and shows them when it thinks there is a passport photo inside of them. One of the technologies the NSA uses for this feat is made by Pittsburgh Pattern Recognition (‘PittPatt’), now owned by Google. We underestimate how much a company like Google is already part of the military industrial complex. I therefore can’t resist showing a new piece of Google technology: the military robot ‘WildCat’ made by Boston Dynamics which was bought by Google in December 2013 […]

It is not only the government who is following us and trying to influence our behavior. In fact, it is the standard business model of the Internet. Our behaviour on the Internet is nearly always mediated by a third party. Facebook and WhatsApp sit between you and your best friend, Spotify sits between you and Beyoncé, Netflix sits between you and Breaking Bad and Amazon sits between you and however many Shades of Grey. The biggest commercial intermediary is Google who by now decides, among other things how I walk from the station to the theatre, in which way I will treat the symptoms of my cold, whether an email I’ve sent to somebody else should be marked as spam, where best I can book a hotel, and whether or not I have an appointment next week Thursday. […]

The casinos were the first industry to embrace the use of AEDs (automatic defibrillators). Before they started using them, the ambulance staff was usually too late whenever somebody had a heart attack: they are only allowed to use the back entrance (who will enter a casino when there is an ambulance in front of the door?) and casinos are purposefully designed so that you easily lose your way. Dow Schüll describes how she is with a salesperson for AEDs looking at a video of somebody getting a heart attack behind a slot machine. The man falls off his stool onto the person sitting next to him. That person leans to the side a little so that the man can continue his way to the ground and plays on. While the security staff is sounding the alarm and starts working the AED, there is literally nobody who is looking up from their gambling machine, everybody just continues playing.

This sort of reminds me of the feeling I often have when people around me are busy with Facebook on their phone. The feeling that it makes no difference what I do to get the person’s attention, that all of their attention is captured by Facebook. We shouldn’t be surprised by that. Facebook is very much like a virtual casino abusing the same cognitive weaknesses as the real casinos. The Facebook user is seen as an ‘asset’ of which the ‘time on service’ has to be made a long as possible, so that the ‘user productivity’ is as high as possible. Facebook is a machine that seduces you to keep clicking on the ‘like’ button.

It’s not just Facebook, either. I feel that way about almost all smartphone apps and social networks, too. Your attention is the currency.

Are You Working in a Start-up or Are You in Jail?  

Niree Noel:

Do you need security clearance to enter and exit the facility?

Are you surrounded by windows that don’t open?

Are you dressed the same as everyone else? Using the same stuff as everyone else? At about the same time?

I’m Brianna Wu, And I’m Risking My Life Standing Up To Gamergate  

Brianna Wu:

This weekend, a man wearing a skull mask posted a video on YouTube outlining his plans to murder me. I know his real name. I documented it and sent it to law enforcement, praying something is finally done. I have received these death threats and 43 others in the last five months.

This experience is the basis of a Law & Order episode airing Wednesday called the “Intimidation Game.” I gave in and watched the preview today. The main character appears to be an amalgamation of me, Zoe Quinn, and Anita Sarkeesian, three of the primary targets of the hate group called GamerGate.

My name is Brianna Wu. I develop video games for your phone. I lead one of the largest professional game-development teams of women in the field. Sometimes I speak out on women in tech issues. I’m doing everything I can to save my life except be silent.

The week before last, I went to court to file a restraining order against a man who calls himself “The Commander.” He made a video holding up a knife, explaining how he’ll murder me “Assassin’s Creed Style.” He wrecked his car en route to my house to “deliver justice.” In logs that leaked, he claimed to have weapons and a compatriot to do a drive-by.

Awful, disturbing stuff.

I’ll also remind you that you don’t have to be making death and rape threats to be a part of sexism in tech. The hatred of women has got to stop.

See also the “top stories” at the bottom of the essay. This representation of women as obsessed only with their looks is pretty toxic to both men and women, too.

The Dynabook and the App Store

Yesterday I linked to J. Vincent Toups’ 2011 Duckspeak Vs Smalltalk, an essay about how far, or really how little, we’ve come since Alan Kay’s Dynabook concept, and a critique of the limitations inherent in today’s App Store style computing.

A frequent reaction to this line of thought is “we shouldn’t make everyone be a programmer just to use a computer.” In fact, after Loren Brichter shared the link on Twitter, there were many such reactions. While I absolutely agree abstractions are a good thing (e.g., you shouldn’t have to understand how electricity works in order to turn on a light), one of the problems with computers and App Stores today is that we don’t even have the option of knowing how the software works, even if we wanted to.

But the bigger problem is what our conception of programming is today. When the Alto computer was being researched at Xerox, nobody was expecting people to program like we do today. JavaScript, Objective-C, and Swift (along with all the other “modern” languages of today) are pitiful languages for thinking; they were designed instead for managing computer resources (JavaScript, for example, was thoughtlessly cobbled together in just ten days). The reaction of “people shouldn’t have to program to use a computer” hinges on what it means to program, and what software developers think of as programming is vastly different from what the researchers at Xerox had in mind.

Programming, according to Alan Kay and the gang, was a way for people to be empowered by computers. Alan correctly recognized the computer as a dynamic medium (the “dyna” in “Dynabook”) and deemed it crucial that people be literate in this medium. Literacy, you’ll recall, means being able to read and write in a medium, to be able to think critically and reason with a literature of great works (that’s the “book” in “Dynabook”). The App Store method of software essentially neuters the medium into a one-way consumption device. Yes, you can create on an iPad, but the system’s design language does not allow for the creation of dynamic media.

Nobody is expecting people to have to program a computer in order to use it, but the PARC philosophy has at its core a symmetric concept of creation as well as consumption. Not only are all the parts of Smalltalk accessible to any person, but all the parts are live, responsive, active objects. When you need to share a live, interactive model with your colleague or your student, you send the model itself: not an attachment, not a video or a picture, but the real live object. When you need to do an intricate task, you don’t use disparate “apps” and pray the developers have somehow enabled data sharing between them; you combine the parts yourself. That’s the inherent power in the PARC model that we’ve completely eschewed in modern operating systems.

Smalltalk and the Alto were far from perfect, and I’ll be the last to suggest we use them as is. But I will suggest we understand the philosophy and the desires to empower people with computers and use that understanding to build better systems. I’d highly recommend reading Alan’s Early History of Smalltalk and A Personal Computer for Children of All Ages to learn what the personal computer was really intended to be.

Language in Everyday Life  

Ash Furrow on discriminating language:

Recently, I’ve been examining the language I use in the context of determining if I may be inadvertently hurting anyone. For instance, using “insane” or “crazy” as synonyms for “unbelievable” probably doesn’t make people suffering from mental illness feel great. […]

Pretty straightforward. There are terms out there that are offensive to people who identify as members of groups that those terms describe. The terms are offensive primarily because they connote negativity beyond the meaning of the word. […]

To me, the bottom-line is that these words are hurtful and there are semantically identical synonyms to use in their place, so there is no reason to continue to use them. Using the terms is hurtful and continuing to use them when you know they’re hurtful is kind of a dick move.

Ash published this article in October 2014 and it’s been on my mind ever since. It makes total sense to me, and I’ve been trying hard to remove these words from my vocabulary. It takes time, especially considering how pervasive they are, but it’s important. If you substitute the word “magical” for any of the bad words, it makes your sentences pretty delightful, and it shows how banal the original words really are; we over-use them anyway.

The Decline of the Xerox PARC Philosophy at Apple Computers  

J. Vincent Toups on the Dynabook:

While the Dynabook was meant to be a device deeply rooted in the ethos of active education and human enhancement, the iDevices are essentially glorified entertainment and social interaction (and tracking) devices, and Apple controlled revenue stream generators for developers. The entire “App Store” model, then, works to divide the world into developers and software users, whereas the Xerox PARC philosophy was for there to be a continuum between these two states. The Dynabook’s design was meant to recruit the user into the system as a fully active participant. The iDevice is meant to show you things, and to accept a limited kind of input - useful for 250 character Tweets and Facebook status updates, all without giving you the power to upset Content Creators, upon whom Apple depends for its business model. Smalltalk was created with the education of adolescents in mind - the iPad thinks of this group as a market segment. […]

It is interesting that at one point, Jobs (who could not be reached for comment [note: this was written before Jobs’ death]) described his vision of computers as “interpersonal computing,” and by that standard, his machines are a success. It is just a shame that in an effort to make interpersonal engagement over computers easy and ubiquitous, the goal of making the computer itself easily engaging has become obscured. In a world where centralized technology like Google can literally give you a good guess at any piece of human knowledge in milliseconds, it’s a real tragedy that the immense power of cheap, freely available computational systems remains locked behind opaque interfaces, obscure programming languages, and expensive licensing agreements.

The article is also great because it helps dispel the myth that Apple took “Xerox’s rough unfinished UI and polished it for the Mac.” It’s closer to the truth to say Apple dramatically stripped the Smalltalk interface of its functionality, resulting in a toy, albeit cheaper, personal computer.

Why I Just Asked My Students To Put Their Laptops Away  

Renowned internet media proponent and NYU professor, Clay Shirky:

Over the years, I’ve noticed that when I do have a specific reason to ask everyone to set aside their devices (‘Lids down’, in the parlance of my department), it’s as if someone has let fresh air into the room. The conversation brightens, and more recently, there is a sense of relief from many of the students. Multi-tasking is cognitively exhausting — when we do it by choice, being asked to stop can come as a welcome change.

So this year, I moved from recommending setting aside laptops and phones to requiring it, adding this to the class rules: “Stay focused. (No devices in class, unless the assignment requires it.)” Here’s why I finally switched from ‘allowed unless by request’ to ‘banned unless required’. […]

Worse, the designers of operating systems have every incentive to be arms dealers to the social media firms. Beeps and pings and pop-ups and icons, contemporary interfaces provide an extraordinary array of attention-getting devices, emphasis on “getting.” Humans are incapable of ignoring surprising new information in our visual field, an effect that is strongest when the visual cue is slightly above and beside the area we’re focusing on. (Does that sound like the upper-right corner of a screen near you?)

The form and content of a Facebook update may be almost irresistible, but when combined with a visual alert in your immediate peripheral vision, it is—really, actually, biologically—impossible to resist. Our visual and emotional systems are faster and more powerful than our intellect; we are given to automatic responses when either system receives stimulus, much less both. Asking a student to stay focused while she has alerts on is like asking a chess player to concentrate while rapping their knuckles with a ruler at unpredictable intervals. […]

Computers are not inherent sources of distraction — they can in fact be powerful engines of focus — but latter-day versions have been designed to be, because attention is the substance which makes the whole consumer internet go.

The fact that hardware and software is being professionally designed to distract was the first thing that made me willing to require rather than merely suggest that students not use devices in class. There are some counter-moves in the industry right now — software that takes over your screen to hide distractions, software that prevents you from logging into certain sites or using the internet at all, phones with Do Not Disturb options — but at the moment these are rear-guard actions. The industry has committed itself to an arms race for my students’ attention, and if it’s me against Facebook and Apple, I lose. […]

The “Nearby Peers” effect, though, shreds that rationale. There is no laissez-faire attitude to take when the degradation of focus is social. Allowing laptop use in class is like allowing boombox use in class — it lets each person choose whether to degrade the experience of those around them.

Explorable Explanations  

Nicky Case on explorable explanations:

This weekend, I attended a small 20-person workshop on figuring out how to use interactivity to help teach concepts. I don’t mean glorified flash cards or clicking through a slideshow. I mean stuff like my 2D lighting tutorial, or this geology-simulation textbook, or this explorable explanation on explorable explanations.

Over the course of the weekend workshop, we collected a bunch of design patterns and considerations, which I’ve made crappy diagrams of, as seen below. Note: this was originally written for members of the workshop, so there’s a lot of external references that you might not get if you weren’t there.

This is great because it’s full of examples about why and when and what to make things explorable.

Design & Compromise  

Mills Baker:

In ten years, when America’s health care system is still a hideous, tragic mess, Republicans will believe that this is due to the faulty premises of Democratic legislation, while Democrats will believe that the legislation was fatally weakened by obstinate Republicans. While we can of course reason our way to our own hypotheses, we will lack a truly irrefutable conclusion, the sort we now have about, say, whether the sun revolves around the earth.

Thus: a real effect of compromise is that it prevents intact ideas from being tested and falsified. Instead, ideas are blended with their antitheses into policies that are “no one’s idea of what will work,” allowing the perpetual political regurgitation, reinterpretation, and relational stasis that defines the governance of the United States.

Baker goes on to detail how compromise results in similarly poor designs and argues in favour of the auteur. Compromise can work like racial colourblindness: instead of blending every voice into a neutral average, each voice needs to shine on its own merits.

Worse still, in my experience, compromise is self-perpetuating: it’s difficult to convince an organization to stop compromising when compromising is the only way it can agree to anything. It’s poison in the institutional well.

Bret Victor’s “The Humane Representation of Thought”

Right before Christmas, Bret Victor released his newest talk, “The Humane Representation of Thought”, summing up the current state of his long-reaching, far-seeing visionary research. I’ve got lots of happy-but-muddled thoughts about it, but suffice it to say, I loved it. If you like his work, you’ll love this talk.

He also published some notes on the talk about the thinking that led to the talk.

Shortly after the talk was published, Wired ran a story about him and his work. It’s nothing too in-depth, but it offers a nice overview of the talk and his research.

Finally, somehow I missed this profile of Bret and his research by John Pavlus published in early December.

On Greatness  

Glen Chiacchieri on the Greats:

There seem to be a few unifying components of greatness. The first is willingness to work hard. No one has become great by surfing the internet. Anyone who you would consider great has most likely achieved their status through sheer hard work, not necessarily their genius. In fact, their genius probably came after the fact, as a result of their work, rather than through any latent brilliance that was lurking beneath the surface, ready to be sprung upon the world. […]

Great people don’t just sit around thinking, either; they’re creating. Without creating something, there would be no sign of their greatness. They would be another cog in the mass machine, churning ideas. They’d be addicted to brain crack. But they’re not. Great people make things: books, songs, blogonet posts, videos, programs, stories, websites. Their greatness is in their creations. Their creations show signs of the hard work and unique genius of the person making it.

I’ll add something sort of parallel to what Glen is saying here: great people make their work better, and express it more clearly, by doing a lot of it.

I’m not particularly after greatness (it’s fine if you are), but I’ve struggled a lot in recent years with ideas I “know” to be true in my mind but that I can’t easily express clearly, either in writing or in speech. That’s a problem, because whatever potential greatness there might be, nobody else can understand it (that, and it’s probably just actually less great because it isn’t clear, even to me, yet).

Great people who create a lot hone their skills of expressing and refining their ideas.

It’s a Shame

Textual programming languages are mostly devoid of structure — any structure that exists actually lives in our heads — and we have to organize our code around that mentally (modern editors offer us syntax highlighting and auto-indentation, which help, but we still have to maintain the structure in our minds).

New programmers often write “spaghetti code,” or don’t use variables, or don’t use functions. We teach them about these features, but they often don’t make the connection as to why they’re useful. How many times have you seen a beginner programmer ask “Why would I use that?” after you explain a new feature to them? Actually, how many times have you seen the opposite? I’ll bet almost never.
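To make the contrast concrete, here’s a small sketch of my own (the numbers and names are invented for illustration, not from anyone’s post): the same logic written the way a beginner often writes it, and then with the structure named and lifted into a function.

```python
# Spaghetti style: the "structure" (apply 15% tax) exists only in the
# reader's head, repeated by hand at every use site.
price_a = 100 * 1.15  # add 15% tax
price_b = 250 * 1.15  # add 15% tax again
price_c = 75 * 1.15   # and again -- change the rate, change three lines

# Factored style: the structure is named, and lives in the code itself.
TAX_RATE = 0.15

def with_tax(price):
    """Return the price with sales tax applied."""
    return price * (1 + TAX_RATE)

prices = [with_tax(p) for p in (100, 250, 75)]
```

Nothing about the language forces the second version; the structure is something the programmer has to choose to impose, which is exactly the gap beginners fall into.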

I have a hunch this is sort of related to why development communities often shame their members so much. There are a lot of mean spirits around. There’s a lot of “Why the hell would you ever use X?? Don’t you know about Y? Or why X is so terrible? It’s bad and you should feel bad.” We have way too much of that, and my hunch is this is (partly) related to our programming environments being entirely structureless.

When the medium you work in doesn’t actually encourage powerful ways of getting the work done, the community fills in the gaps by shaming its members into “not doing it wrong.”

I could be wrong, and even if I’m right, I’m not saying this is excusable. But I do think we’re quite well known for being a shame culture, and I think we do that in order to keep our heads from exploding. We shame until we believe, and we believe until we understand. Perhaps our environments should help us understand better in the first place, and we can leave the shaming behind.

Citation Needed  

Mike Hoye on why most programming languages use zero-indexed arrays:

Whatever justifications or advantages came along later – and it’s true, you do save a few processor cycles here and there and that’s nice – the reason we started using zero-indexed arrays was because it shaved a couple of processor cycles off of a program’s compilation time. Not execution time; compile time. […]

I can tell nobody has ever actually looked this up.

Whatever programmers think about themselves and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize. We tell and retell this collection of unsourced, inaccurate stories about the nature of the world without ever doing the research ourselves, and there’s no other word for that but “mythology”. Worse, by obscuring the technical and social conditions that led humans to make these technical and social decisions, by talking about the nature of computing as we find it today as though it’s an inevitable consequence of an immutable set of physical laws, we’re effectively denying any responsibility for how we got here. And worse than that, by refusing to dig into our history and understand the social and technical motivations for those choices, by steadfastly refusing to investigate the difference between a motive and a justification, we’re disavowing any agency we might have over the shape of the future. We just keep mouthing platitudes and pretending the way things are is nobody’s fault, and the more history you learn and the more you look at the sad state of modern computing the more pathetic and irresponsible that sounds. […]

The second thing is how profoundly resistant to change or growth this field is, and apparently has always been. If you haven’t seen Bret Victor’s talk about The Future Of Programming as seen from 1975 you should, because it’s exactly on point. Over and over again as I’ve dredged through this stuff, I kept finding programming constructs, ideas and approaches we call part of “modern” programming if we attempt them at all, sitting abandoned in 45-year-old demo code for dead languages. And to be clear: that was always a choice. Over and over again tools meant to make it easier for humans to approach big problems are discarded in favor of tools that are easier to teach to computers, and that decision is described as an inevitability.
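For readers who haven’t seen the mechanics Hoye is alluding to, here’s a minimal sketch of my own (not from his article) of why a zero-based index maps directly onto a memory offset, while a one-based index carries a correction term on every access:

```python
ELEMENT_SIZE = 4  # bytes per element, e.g. a 32-bit integer

def address_zero_based(base, i):
    # Element i sits exactly i elements past the base: offset = i * size,
    # with no extra arithmetic.
    return base + i * ELEMENT_SIZE

def address_one_based(base, i):
    # Element i needs the (i - 1) correction before the same multiply.
    return base + (i - 1) * ELEMENT_SIZE

# Both schemes locate the same cells; the zero-based one just skips
# the subtraction, which is the handful of cycles Hoye is talking about.
assert address_zero_based(1000, 0) == address_one_based(1000, 1) == 1000
```

Whether that subtraction is folded in at compile time or paid at run time is exactly the historical distinction the article digs into.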

The masculine mistake  

What is so broken inside American men? Why do we make so many spaces unsafe for women? Why do we demand that they smile as we harass them - and why, when women bring the reality of their everyday experiences into the open, do we threaten to kill them for it?

If you’re a man reading this, you likely feel defensive by now. I’m not one of those guys, you might be telling yourself. Not all men are like that. But actually, what if they are? And what if men like you telling yourselves that you’re not part of the problem is itself part of the problem?

We’ve all seen the video by now. “Smile,” says the man, uncomfortably close. And then, more angrily, “Smile!”

An actress, Shoshana Roberts, spends a day walking through New York streets, surreptitiously recorded by a camera. Dozens of men accost her; they comment on her appearance and demand that she respond to their “compliments.” […]

This is a huge problem. And unfortunately, it’s but one symptom of a larger issue.

Why do men do this? How can men walk down the same streets as women, attend the same schools, play the same games, live in the same homes, be part of the same families - yet either not realize or not care how hellish we make women’s lives?

One possible answer: Straight American masculinity is fundamentally broken. Our culture socializes young men to believe that they are entitled to sexual attention from women, and that women go about their lives with that as their primary purpose - as opposed to just being other people, with their own plans, priorities and desires.

We teach men to see women as objects, not other human beings. Their bodies are things men are entitled to: to judge, to assess, and to dispose of - in other words, to treat as pornographic playthings, to have access to and, if the women resist, to threaten, to destroy.

We raise young boys to believe that if they are not successful at receiving sexual attention from women, then they are failures as men. Bullying is merciless in our culture, and is heaped upon geeky boys by other young men in particular (and all the more so against boys who do not appear straight).

But because young men are taught to despise vulnerability, in themselves and in others, they instead turn that hatred upon those who are already more vulnerable - women and others - with added intensity. Put differently, and without in any way excusing their monstrous behavior, young men are given unrealistic expectations, taught to hate themselves when reality falls short - and then to blame women for the whole thing.

I’m reminded of this excellent and positive TED talk about a need to give boys better stories. We need more stories where “the guy doesn’t get the girl in the end, and he’s OK with that.” We need to teach boys this is a good outcome, that boys aren’t entitled to girls.

If you’re shopping for presents for boys this Christmas, I implore you to keep this in mind. Don’t buy them a story of a prince or a hero who “gets the girl.”

An Educated Guess, the Video  

At long last, the video of my conference talk from NSNorth 2013 wherein I unveiled the Cortex system:

Modern software development isn’t all that modern. Its origins are rooted in the original Macintosh: an era and environment with no networking, slow processors, limited memory, and almost no collaboration between developers or between the processes they wrote. Today, software is developed as though these constraints still remained.

We need a modern approach to building better software. Not just incremental improvements but fundamental leaps forward. This talk presents frameworks at the sociological, conceptual, and programmatic levels to rethink how software should be made, enabling a giant leap to better software.

I haven’t been able to bring myself to watch myself talk yet. Watch it and tell me how it went?