Speed of Light
The masculine mistake  

What is so broken inside American men? Why do we make so many spaces unsafe for women? Why do we demand that they smile as we harass them - and why, when women bring the reality of their everyday experiences into the open, do we threaten to kill them for it?

If you’re a man reading this, you likely feel defensive by now. I’m not one of those guys, you might be telling yourself. Not all men are like that. But actually, what if they are? And what if men like you telling yourselves that you’re not part of the problem is itself part of the problem?

We’ve all seen the video by now. “Smile,” says the man, uncomfortably close. And then, more angrily, “Smile!”

An actress, Shoshana Roberts, spends a day walking through New York streets, surreptitiously recorded by a camera. Dozens of men accost her; they comment on her appearance and demand that she respond to their “compliments.” […]

This is a huge problem. And unfortunately, it’s but one symptom of a larger issue.

Why do men do this? How can men walk down the same streets as women, attend the same schools, play the same games, live in the same homes, be part of the same families - yet either not realize or not care how hellish we make women’s lives?

One possible answer: Straight American masculinity is fundamentally broken. Our culture socializes young men to believe that they are entitled to sexual attention from women, and that women go about their lives with that as their primary purpose - as opposed to just being other people, with their own plans, priorities and desires.

We teach men to see women as objects, not other human beings. Their bodies are things men are entitled to: to judge, to assess, and to dispose of - in other words, to treat as pornographic playthings, to have access to and, if the women resist, to threaten, to destroy.

We raise young boys to believe that if they are not successful at receiving sexual attention from women, then they are failures as men. Bullying is merciless in our culture, and is heaped upon geeky boys by other young men in particular (and all the more so against boys who do not appear straight).

But because young men are taught to despise vulnerability, in themselves and in others, they instead turn that hatred upon those who are already more vulnerable - women and others - with added intensity. Put differently, and without in any way excusing their monstrous behavior, young men are given unrealistic expectations, taught to hate themselves when reality falls short - and then to blame women for the whole thing.

I’m reminded of this excellent and positive TED talk about a need to give boys better stories. We need more stories where “the guy doesn’t get the girl in the end, and he’s OK with that.” We need to teach boys this is a good outcome, that boys aren’t entitled to girls.

If you’re shopping for presents for boys this Christmas, I implore you to keep this in mind. Don’t buy them a story of a prince or a hero who “gets the girl.”

An Educated Guess, the Video  

At long last, the video of my conference talk from NSNorth 2013 wherein I unveiled the Cortex system:

Modern software development isn’t all that modern. Its origins are rooted in the original Macintosh: an era and environment with no networking, slow processors, limited memory, and almost no collaboration between developers or the processes they wrote. Today, software is developed as though these constraints still remain.

We need a modern approach to building better software. Not just incremental improvements but fundamental leaps forward. This talk presents frameworks at the sociological, conceptual, and programmatic levels to rethink how software should be made, enabling a giant leap to better software.

I haven’t been able to bring myself to watch myself talk yet. Watch it and tell me how it went?

Goodbye, Twitter  

Geoff Pado:

It finally hit me: the way I felt wasn’t “people on Twitter are jerks,” it was “people are jerks on Twitter.” After this epiphany, and a brief hiatus to see if I could even break my own habits, I’ve made my decision: I’m getting off of Twitter, effective immediately. A link to this blog post will be my final tweet, and I’m only going to watch for replies until Tuesday. As part of my hiatus, I’ve already deleted all my Twitter apps from all my devices, and I’ll be scrambling my password on Tuesday to even prevent myself from logging in without going through the “forgot my password” hoops.

Sounds good to me.

Christmasmania

It’s November 23, 2014. In Brooklyn, New York, it’s getting colder as we inch closer to Winter. The leaves are still falling, but the snow isn’t. Nor is there any snow on the ground. But if you listen closely, you can hear a disturbing sound.

It’s thirty-two days until Christmas, and the grocery stores are already playing Christmas music. The streets are already decorated, and Starbucks has all the ornaments and “Holiday Flavors” out in full swing. It’s a week before American Thanksgiving.

This is Christmasmania.

Let’s look at Christmasmania for a moment. We’ve started celebrating a holiday that comes once a year thirty-two days before it arrives. We’ll likely celebrate it for a week after the day, too. That’s almost forty days of Christmas, every year. Let’s look at this another way.

Conservatively, let’s say we spend one month per year in Christmasmania. One month per year is one twelfth of a year. Let’s pretend we live in a land of Christmasmania where, instead of devoting one month of the year (one twelfth of it) to the “holiday spirit”, we spent two hours (2/24 = 1/12 of a day) of every single day of the year in Christmasmania.

Every single day, between the hours of 6 and 8 PM, families don their yuletide sweaters, pour each other cups of eggnog, and listen to Christmas carols. They’ll spend a few minutes shopping for that perfect gift, a few minutes wrapping it, and they’ll keep it under the tree for half an hour or so. The kids will watch YouTube clips of Rudolph and How the Grinch Stole Christmas. And maybe, if they’re good, the kids will get to open a present before being sent off to bed, to have visions of sugarplums dance in their heads.

Two hours of Christmasmania. Every day.

Here’s the really insidious thing about Christmasmania. It’s not that the decorations go up during Halloween. It’s not that Starbucks has eggnog-flavoured napkins before Remembrance Day and Veterans Day. It’s not that the same garbage Christmas songs are recycled and re-recorded by the pop-royalty-du-jour and pumped out of every shopping centre speaker before Americans even have a chance to be thankful. It’s not the overcommercialized nature of “finding the perfect gift for that special someone.”

No, what’s really insidious about Christmasmania is how self-perpetuating and reinforcing it is. For the Christmasmania virus to survive, it must take control of its host, but not kill its host.

Christmasmania, also known as “the Holiday Spirit,” requires its hosts to keep one another in line. Every single one of the numerous Christmas movies (of which Christmasmania dictates we watch at least a few) has at least one social outcast, the “grinch”, who simply does not like Christmas. We are taught to despise this grinch, to pity this grinch, and to rehabilitate the grinch so that he or she can see the “true meaning of Christmas” and get into the “Holiday Spirit.” “If you don’t like Christmas,” the mania tells us, “there’s something wrong with you, because nothing can be wrong with Christmas. Do you like giving? Don’t you like shopping?”

I think Christmas can be a wonderful celebration, a special time to be close with your family and loved ones that you might not otherwise get during the rest of the year, and that’s a great thing. But when we as a whole are programmed and forced to buy into the mania that surrounds it, the celebration becomes lost in a morass of stop-motion, candy-cane-flavoured Bing Crosby songs. So this Christmas, remember your loved ones. They’re the real present.

Alan Kay’s Commentary on “A Personal Computer For Children Of All Ages”  

So many great gems in here:

The next year I visited Seymour Papert, Wally Feurzig, and Cynthia Solomon to see the LOGO classroom experience in the Lexington schools. This was a revelation! And was much more important to me than the metaphors of “tools” and “vehicles” that were central to the ARPA way of characterizing its vision. This was more like the “environment of powerful epistemology” of Montessori, the “environment of media” of McLuhan, and even more striking: it evoked the invention of the printing press and all that it brought. This was not just “augmenting human intellect”, but the “early shaping of human intellect”. This was a “cosmic service idea”. […]

At this first brush, the service model was: facilitate children “learning the world by constructing it” via an interactive graphical interface to an “object-oriented-simulation-oriented-LOGO-like-language”.

A few years later at Xerox PARC I wrote “A Personal Computer For Children Of All Ages”. This was written mostly to start exploring in more depth the desirable services that should be offered. I.e. what should a Dynabook enable? And why should it enable it?

The first context was “everything that ARPA envisioned for adults but in a form that children could also learn and use”. The analogy here was to normal language learning in which children are not given a special “children’s language” but pick up speaking, reading and writing their native language directly through subsets of both the content and the language. In practice for the Dynabook, this required inventing better languages and user interfaces for adults that could also be used for children (this is because most of the paraphernalia for adults in those days was substandard for all). […]

Back then, it was in the context that “education” meant much more than just competing for jobs, or with the Soviet Union; how well “real education” could be accomplished was the very foundation of how well a democratic federal republic could carry out its original ideals.

[Thomas] Jefferson’s key idea was that a general population that has learned to think and has acquired enough knowledge will be able to dynamically steer the “ship of state” through the sometimes rough waters of the future and its controversies (and conversely, that the republic will fail if the general population is not sufficiently educated).

An important part of this vision was that the object of education was not to produce a single point of view, but to produce citizens who could carry out the processes of reconciling different points of view.

If most Americans today were asked “why education?”, it’s a safe bet that most would say “to help get a good job” or to “help make the US more competitive worldwide” (a favorite of our recent Presidents). Most would not mention the societal goal of growing children into adults who will be “enlightened enough to exercise their control with a wholesome discretion” or to understand that they are the “true corrective of abuses of … power”.

Goldieblox Introduces an Action Figure for Girls  

Goldieblox:

Research shows that girls who play with fashion dolls see fewer career options for themselves than boys (see study). One fashion doll is sold every three seconds. Girls’ feet are made for high-tops, not high heels…it’s time for change.

Aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaawesome.

Thoughts About Teaching Science and Mathematics To Young Children [PDF]  

Mind-opening thoughts about early education of math and science by Alan Kay:

Scientists escape to a large extent from simple belief by having done enough real experimentation, model building using mathematics that suggests new experiments, etc., to realize that science is more like map-making for real navigators than bible-making: IOW, the maps need to be as accurate as possible with annotations for errors and kinds of measurements, done by competent map-makers rather than story tellers, and they are always subject to improvement and rediscovery: they never completely represent the territory they are trying to map, etc.

Many of us who have been learning how to help children become scientists (that is, to be able to think and act as scientists some of the time) have gathered evidence which shows that helping children actually do real science at the earliest possible ages is the best known way to help them move from simple beliefs in dogma to the more skeptical, empirically derived models of science. […]

There is abundant evidence that helping children move from human built-in heuristics and the commonsense of their local culture to the “uncommonsense” and heuristic thinking of science, math, etc., is best done at the earliest possible ages. This presents many difficulties ranging from understanding how young children think to the very real problem that “the younger the children, the more adept need to be their mentors (and the opposite is more often the case)”.[…]

So, for young and youngish children (say from 4 to 12) we still have a whole world of design problems. For one thing, this is not an homogenous group. Cognitively and kinesthetically it is at least two groups (and three groupings is an even better fit). So, we really think of three specially designed and constructed environments here, where each should have graceful ramps into the next one.

The current thresholds exclude many designs, but more than one kind of design could serve. If several designs could be found that serve, then we have a chance to see if the thresholds can be raised. This is why we encourage others to try their own comprehensive environments for children. Most of the historical progress in this area has come from a number of groups using each other’s ideas to make better attempts (this is a lot like the way any science is supposed to work). One of the difficulties today is that many of the attempts over the last 15 or so years have been done with too low a sense of threshold and thus start to clog and confuse the real issues.

I think one of the trickiest issues in this kind of design is an analogy to the learning of science itself, and that is “how much should the learners/users have to do by themselves vs. how much should the curriculum/system do for them?” Most computer users have been mostly exposed to “productivity tools” in which as many things as possible have been done for them. The kinds of educational environments we are talking about here are at their best when the learner does the important parts by themselves, and any black or translucent boxes serve only on the side and not at the center of the learning. What is the center and what is the side will shift as the learning progresses, and this has to be accommodated.

OTOH, the extreme build it from scratch approach is not the best way for most minds, especially young ones. The best way seems to be to pick the areas that need to be from scratch and do the best job possible to make all difficulties be important ones whose overcoming is the whole point of the educational process (this is in direct analogy to how sports and music are taught – the desire is to facilitate a real change for the better, and this can be honestly difficult for the learner).

The Fantasy and Abuse of the Manipulable User  

Some quotes from Betsy Haibel’s must-read essay. If you make or use software, you should read it.

Deceptive linking practices – from big flashing “download now” buttons hovering above actual download links, to disguising links to advertising by making them indistinguishable from content links – may not initially seem like violations of user consent. However, consent must be informed to be meaningful – and “consent” obtained by deception is not consent.

Consent-challenging approaches offer potential competitive benefits. Deceptive links capture clicks – so the linking site gets paid. Harvesting of emails through automatic opt-in aids in marketing and lead generation. While the actual corporate gain from not allowing unsubscribes is likely minimal – users who want to opt out are generally not good conversion targets – individuals and departments with quotas to meet will cheer the artificial boost to their mailing list size.

These perceived and actual competitive advantages have led to violations of consent being codified as best practices, rendering them nigh-invisible to most tech workers. It’s understandable – it seems almost hyperbolic to characterize “unwanted email” as a moral issue. Still, challenges to boundaries are challenges to boundaries. If we treat unwanted emails, or accidentally clicked advertising links, as too small a deal to bother, then we’re asserting that we know better than our users what their boundaries are. In other words, we’re placing ourselves in the arbiter-of-boundaries role which abuse culture assigns to “society as a whole.”[…]

The industry’s widespread individual challenges to user boundaries become a collective assertion of the right to challenge – that is, to perform actions which are known to transgress people’s internally set or externally stated boundaries. The competitive advantage, perceived or actual, of boundary violation turns from an “advantage” over the competition into a requirement for keeping up with them.

Individual choices to not fall behind in the arms race of user mistreatment collectively become the deliberate and disingenuous cover story of “but everyone’s doing it.”[…]

The hacker mythos has long been driven by a narrow notion of “meritocracy.” Hacker meritocracy, like all “meritocracies,” reinscribes systems of oppression by victim-blaming those who aren’t allowed to succeed within it, or gain the skills it values. Hacker meritocracy casts non-technical skills as irrelevant, and punishes those who lack technical skills. Having “technical merit” becomes a requirement to defend oneself online. […]

It’s easy to bash Zynga and other manufacturers of cow clickers and Bejeweled clones. However, the mainstream tech industry has baked similar compulsion-generating practices into its largest platforms. There’s very little psychological difference between the positive-reinforcement rat pellet of a Candy Crush win and that of new content in one’s Facebook stream.[…]

I call on my fellow users of technology to actively resist this pervasive boundary violation. Social platforms are not fully responsive to user protest, but they do respond, and the existence of actual or potential user outcry gives ethical tech workers a lever in internal fights about user abuse.

Facebook Rooms  

Inspired by both the ethos of these early web communities and the capabilities of modern smartphones, today we’re announcing Rooms, the latest app from Facebook Creative Labs. Rooms lets you create places for the things you’re into, and invite others who are into them too.[…]

Not only are rooms dedicated to whatever you want, room creators can also control almost everything else about them. Rooms is designed to be a flexible, creative tool. You can change the text and emoji on your like button, add a cover photo and dominant colors, create custom “pinned” messages, customize member permissions, and even set whether or not people can link to your content on the web. In the future, we’ll continue to add more customizable features and ways to tweak your room. The Rooms team is committed to building tools that let you create your perfect place. Our job is to empower you.

My guess is that Rooms is a strategic move to try to attract teens who seek privacy in apps like Snapchat. Seems like a good place for a clique.

When Women Stopped Coding  

Steve Henn for NPR:

A lot of computing pioneers — the people who programmed the first digital computers — were women. And for decades, the number of women studying computer science was growing faster than the number of men. But in 1984, something changed. The percentage of women in computer science flattened, and then plunged, even as the share of women in other technical and professional fields kept rising.

What happened?

This is something Hopscotch is trying to change.

Swift for Beginners

Last week I published a little essay about Swift, imploring iOS developers to start learning Swift today:

The number one question I hear from iOS developers when Swift comes up is “Should you switch to Swift?” and my answer for that is “Probably yes.” It’s of course not a black and white answer and depends on your situation, but if you’re an experienced Objective C programmer, now is a great time to start working in Swift.

In last week’s iOS Dev Weekly, however, Dave Verwer astutely pointed out that this advice is really aimed at experienced iOS developers:

Jason talks mainly about experienced iOS developers in this article but I believe it’s a whole different argument for those who are just getting started.

The intersection of programming languages and learning programming happens to be precisely my line of work, so I thought I’d offer some advice for beginner iOS developers and Swift.

Should you learn Swift or Objective C?

For newcomers to the iOS development platform, I reckon the number one question asked is “Should I learn with Objective C or with Swift?” Contrary to what some may say (e.g., Big Nerd Ranch), I suggest you start learning with Swift first.

The main reason for this argument is cruft: Swift doesn’t have much and Objective C has a whole bunch. Swift’s syntax is much cleaner than Objective C’s, which means a beginner won’t get bogged down with unnecessary details that would otherwise trip them up (e.g., header files, pointer syntax, etc.).

I used to teach a beginners’ iOS development course, and while most learners could grasp the core concepts easily, they were often tripped up by the implementation details of Objective C oozing out of the seams. “Do I need a semicolon here? Why do I need to copy and paste this method declaration? Why don’t ints need a pointer star? Why do strings need an @ sign?” The list goes on.

When you’re learning a new platform and a new language, you have enough of an uphill battle without having to deal with the problems of a 1980s programming language.

In place of Objective C’s header files, imports, and method declarations, Swift has just one file where methods are declared and implemented together, with no need to import files within the same module. There goes all that complexity right out the window. And in place of Objective C’s pointer syntax, Swift uses the same syntax for both reference and value types.
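As a rough illustration (a minimal sketch with hypothetical names, not taken from any course material), here’s a complete, usable type in a single Swift file:

    // A complete type in one Swift file: no header, no separate
    // @interface/@implementation, no required semicolons, no pointer syntax.
    class Greeter {
        var greeting = "Hello"   // type inferred; no @"" string literal
        var timesGreeted = 0

        func greet(name: String) -> String {
            timesGreeted += 1
            return "\(greeting), \(name)!"
        }
    }

    // Reference types and value types are declared and used with the same syntax.
    let greeter = Greeter()
    print(greeter.greet(name: "Ada"))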

Learning Xcode and Cocoa and iOS development all at once is a monumental task, but if you learn it with Swift first you’ll have a much easier time taking it all in.

Swift is ultimately a bigger language than Objective C, with features like advanced enums, Generics/Templates, tuples, operator overloading, etc. There is more Swift to learn but Cocoa was written in Objective C and it doesn’t make use of these features, so they’re not as essential for doing iOS development today. It’s likely that in the coming years Cocoa will adopt more Swift language features, so it’s still good to be familiar with them, but the fact is learning a core amount of Swift is much more straightforward than learning a core amount of Objective C.
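For a taste of what those features look like (a hypothetical sketch of my own; Outcome and divide are not part of Cocoa or the standard library):

    // An enum with associated values plus a generic parameter: two Swift
    // features with no direct Objective C equivalent.
    enum Outcome<Value> {
        case success(Value)
        case failure(String)
    }

    func divide(_ a: Int, by b: Int) -> Outcome<Int> {
        if b == 0 { return .failure("division by zero") }
        return .success(a / b)
    }

    // A tuple holding two results, and a switch over the enum's cases.
    let results = (divide(10, by: 2), divide(1, by: 0))
    switch results.0 {
    case .success(let value): print("quotient: \(value)")
    case .failure(let message): print("error: \(message)")
    }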

A Note about Learning Programming with Swift

I should point out that I’m not necessarily advocating for learning Swift as your first programming language; rather, I’m suggesting that if you’re a developer who’s new to iOS development, you should start with Swift.

If you’re new to programming, there are many better languages for learning, like Lisp, Logo, or Ruby to name just a few. You may very well be able to cut your teeth learning programming with Swift, but it’s not designed as a learning language and has a programming mental model of the “you are a programmer managing computer resources” kind.

Learning Objective C

You should start out learning iOS development with Swift, but once you become comfortable, you should learn Objective C too.

Objective C has been the programming language for iOS since its inception, so there’s lots of it out there in the real world, including books, blog posts, and other projects and frameworks. It’s important to know how to read and write Objective C, but the good news is once you’ve become decent with Swift, programming with Objective C isn’t much of a stretch.

Although their syntaxes differ in some superficial ways, the kind of code you write is largely the same between the two. -viewDidLoad and viewDidLoad() may be implemented in different syntaxes, but what you’re trying to accomplish is basically the same in either case.
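For example (a minimal sketch; the class name is hypothetical), the Swift spelling of that familiar override:

    import UIKit

    class ProfileViewController: UIViewController {
        // Objective C: - (void)viewDidLoad { [super viewDidLoad]; ... }
        // Same lifecycle method, same intent; only the surface syntax differs.
        override func viewDidLoad() {
            super.viewDidLoad()
            title = "Profile"
            view.backgroundColor = .white
        }
    }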

The difficult part about learning Objective C after learning Swift, then, is not learning Cocoa and its concepts but the syntactic salt, mentioned earlier, that comes with the language. Because you already know a bit about view controllers and gesture recognizers, you’ll have a much easier time figuring out the oddities of a less modern syntax than you would have if you tried to learn them both at the same time. It’s much easier to adapt this way.

Learn Swift

Perhaps the biggest endorsement for learning Swift comes from Apple:

Swift is a successor to the C and Objective-C languages.

It doesn’t get much clearer than that.

Patterns to Help You Destroy Massive View Controller  

Soroush Khanlou:

View controllers become gargantuan because they’re doing too many things. Keyboard management, user input, data transformation, view allocation — which of these is really the purview of the view controller? Which should be delegated to other objects? In this post, we’ll explore isolating each of these responsibilities into its own object. This will help us sequester bits of complex code, and make our code more readable.
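To give a flavour of the kind of extraction he’s describing (a rough sketch of my own, not code from his post), here’s a table view’s data source pulled out of its view controller into a small, dedicated object:

    import UIKit

    // An object that owns only the data-source responsibility,
    // so the view controller doesn't have to.
    class NamesDataSource: NSObject, UITableViewDataSource {
        private let names: [String]

        init(names: [String]) {
            self.names = names
            super.init()
        }

        func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
            return names.count
        }

        func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
            let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
            cell.textLabel?.text = names[indexPath.row]
            return cell
        }
    }

    // The view controller shrinks to wiring the pieces together.
    class NamesViewController: UITableViewController {
        private let dataSource = NamesDataSource(names: ["Ada", "Grace", "Barbara"])

        override func viewDidLoad() {
            super.viewDidLoad()
            tableView.register(UITableViewCell.self, forCellReuseIdentifier: "Cell")
            tableView.dataSource = dataSource
        }
    }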

Hopscotch and Mental Models

On May 8, 2014, after many long months of work, we finally shipped Hopscotch 2.0, which was a major redesign of the app. Hopscotch is an interactive programming environment on the iPad for kids 8 and up, and while the dedicated learners used our 1.0 with great success, we wanted to make Hopscotch more accessible for more kids who may have otherwise struggled. Early on, I pushed for a rethinking of the mental model we wanted to present to our programmers so they could better grasp the concept. While I pushed some of the core ideas, this was of course a complete team effort. Every. Single. Member. of our (admittedly small!) team contributed a great deal over many long discussions and long days building the app.

What follows is an examination of mental models, and the models used in various versions of Hopscotch.

Mental models

The human brain is 100,000-year-old hardware we’re relatively stuck with. Applications are software created by 100,000-year-old hardware. I don’t know which is scarier, but I do know it’s a lot easier to adapt the software than it is to adapt the hardware. A mental model is how you adapt your software to the human brain.

A mental model is a device (in the “literary device” sense of the word) you design for humans to use, knowingly or not, to better grasp concepts and accomplish goals with your software. Mental models work not by making the human “play computer” but by making the computer “play human,” thus giving the person a conceptual framework to think in while using the software.

The programming language Logo uses the mental model of the Turtle. When children program the Turtle to move in a circle, they teach it in terms of how they would move in a circle (“take a step, turn a bit, over and over until you make a whole circle”) (straight up just read Mindstorms).

There are varying degrees of success in a program’s mental model, usually correlating to the amount of thought the designers put into the model itself. A successful mental model results in the person having a strong connection with the software, where a weak mental model leaves people confused. The model of hierarchical file systems (e.g., “files and folders”) has long been a source of consternation for people because it forces them to think like a computer to locate information.

You may know your application’s mental model very intimately because you created it, but most people will not be so fortunate when they start out. The easiest way to understand an application’s mental model is to have smaller leaps to make—for example, most iPhone apps are far more alike than they are different—so people don’t have to tread too far into the unknown.

One of the more effective tricks we employ in graphical user interfaces is the spatial analogy. Views push and pop left and right on the screen, suggesting the application exists in a space extending beyond the bounds of the rectangle we stare at. Some applications offer a spatial analogy in terms of a zooming interface, like a Powers of Ten but for information (or “thought vectors in concept space”, to quote Engelbart) (see Jef Raskin’s The Humane Interface for a thorough discussion on ZUIs).

These spatial metaphors can be thought of as gestures in the Raskinian sense of the term (defined as “…an action that can be done automatically by the body as soon as the brain ‘gives the command’. So Cmd+Z is a gesture, as is typing the word ‘brain’”) where instead of acting, the digital space provides a common, habitual environment for performing actions. There is no Raskinian mode switch because the person already has familiarity with the space.

Hopscotch 1.x

Following in the footsteps of the Logo turtle, Hopscotch characters are programmed in the same egocentric mental model (here’s a video of programming one character in Hopscotch 1.0). If I want Bear to move in a circle, I first ponder how I would move in a circle and translate this to Hopscotch blocks. If this were all there was to the story, this mental model would be pretty sufficient. But Hopscotch projects can have multiple programmed characters executing at the same time. Logo’s model works well because it’s clear there is one turtle to one programmer, but when there are multiple characters to take care of, it’s conceptually more of a stretch to program them all this way.

Hopscotch 1.0 was split diametrically between the drag-and-drop code blocks for the various characters in your project and the Stage, the area where your program executes and your characters wiggle their butts off, as directed. This division is quite similar to the “write code / execute program” model most programming environments provide developers, but that doesn’t mean it’s appropriate (for children or professionals). Though the characters were tangible (tappable!) on the Stage, they remained abstract in the code editor. Simply put, there wasn’t a strong connection between your code and your program. This discord made it very difficult for beginners to connect their code to their characters.

Hopscotch 2.x

In the redesign, we unified the Stage-as-player with the Stage-as-editor. In the original version, you programmed your characters by switching between tabs of code, but in the redesign you see your characters as they appear on the Stage. Gone are the two distinct modes; instead you just have the Stage, which you can edit. This means you no longer position characters with a small graph, but instead pick up your characters and place them directly.

The code blocks, which used to live in the tabs, now live inside the characters themselves. This gives a stronger mental model of “My character knows how to draw a circle because I programmed her directly”. When you tap a character, you see a list of “Rules” appear as thought bubbles beside the character. Rules are mini-programs for each character that are played for different events (e.g., “When the iPad is tilted, make Bear dance”) and you edit their code by tapping into them. This concept attaches the abstract idea of “code” to the very spatial and tangible characters you’re trying to program, and we found beginners could grasp it much more quickly than the original model.

Along the way, we added “little things” like custom functions and a mini code preview that highlights code blocks as it executes, to let programmers quickly see the results of their changes for the character they’re programming. These aren’t additions to the mental model per se, but they do help close the gap between abstract code and your characters following your program.

Our redesign got a lot of love. We were featured by Apple and Recode, and even the Grubers gave us some effusive praise. It wasn’t perfect, and we’ve continued to work on the design in the intervening months, but it’s been a big improvement.

A mental framework

A strong mental model benefits the people using your software because it helps the software and the person meet each other halfway. But mental models also help you as a designer to understand the messages you send through your application. By rethinking our mental model for Hopscotch, we dramatically improved both how we build the program and how people use it, and it’s given us a framework to think in for the future. As you build or use applications, be aware of the signals you send and receive, and it will help you understand the software better.

Swift

After playing with Swift in my spare time for most of the summer, and after using Swift full time at Hopscotch for about a month now, I thought I’d share some of my thoughts on the language.

The number one question I hear from iOS developers when Swift comes up is “Should you switch to Swift?” and my answer for that is “Probably yes.” It’s of course not a black and white answer and depends on your situation, but if you’re an experienced Objective C programmer, now is a great time to start working in Swift. I would suggest switching to it full time for all new iOS work (I wouldn’t recommend going back and re-writing your old Objective C code, but maybe replace bits and pieces of it as you see fit).

(If you’re a beginner to iOS development, see my thoughts on Swift for Beginners)

Idiomatic Swift

One reason I hear for developers wanting to hold off is “Swift is so new there aren’t really accepted idioms or best practices yet, so I’m going to wait a year or two for those to emerge.” I think that’s a fair argument, but I’d argue it’s better for you to jump in and invent them now instead of waiting for somebody else to do it.

I’m pretty sure when I look back on my Swift code a year from now I’ll cringe from embarrassment, but I’d rather be figuring it all out now, I’d rather be helping to establish what Good Swift looks like than just see what gets handed down. The conventions around the language are malleable right now because nothing has been established as Good Swift yet. It’s going to be a lot harder to influence Good Swift a year or two from now.

And the sooner you become productive in Swift, the sooner you’ll find areas where it can be improved. Like the young Swift conventions, Swift itself is a young language—the earlier in its life you file Radars and suggest improvements to the language, the more likely those improvements will be made. Three years from now Swift The Language is going to be a lot less likely to change compared to today. Your early radars today will have enormous effects on Swift in the future.

Swift Learning Curve

Another reason I hear for not wanting to learn Swift today is not wanting to take a major productivity hit while learning the language. In my experience, if you’re an experienced iOS developer you’ll be up to speed with Swift in a week or two, and then you’ll get all the benefits of Swift (even just not having header files or having to import files all over the place makes programming in Swift so much nicer than Objective C that you might not want to go back).

In that week or two when you’re a little slower at programming than you are with Objective C, you’ll still be pretty productive anyway. You certainly won’t become an expert in Swift right away (because nobody except maybe Chris Lattner is yet anyway!), but you’ll be writing arguably cleaner code, and you might even have some fun doing it.

I’ve been keeping a list of tips I’ve picked up while learning Swift so far, like how the compatibility with Objective C works, and how to do certain things (like weak capture lists for closures). If you have any suggestions of your own, feel free to send me a Pull Request.
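As one example from that list (a minimal sketch; DataLoader is a hypothetical class of my own, not an API):

    import Foundation

    class DataLoader {
        var latestData: Data?

        func reload() {
            guard let url = URL(string: "https://example.com/data") else { return }
            // The [weak self] capture list stops the closure from retaining
            // the loader, which would otherwise create a retain cycle.
            URLSession.shared.dataTask(with: url) { [weak self] data, _, _ in
                guard let self = self, let data = data else { return }
                self.latestData = data
            }.resume()
        }
    }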

Grab Bag of Various Caveats

  • I don’t fully understand initializers in Swift yet, but I kind of hate them. I get the theory behind them, that everything strictly must be initialized, but in practice this super sucks. This solves a problem I don’t think anybody really had.

  • Compile times for large projects suck. They’re really slow, because (I think) any change in any Swift file causes all the Swift files to be recompiled on build. My hunch is that breaking your project up into smaller modules (aka frameworks) should relieve the slow build times. I haven’t tried this yet.

  • The Swift debugger feels pretty non-functional most of the time. I’m glad we have Playgrounds to test out algorithms and such, but unfortunately I’ve had to mainly resort to pooping out println()s of debug values.

  • What the hell is with the name println()? Would it have killed them to actually spell out the word printLine()? Do the language designers know about autocomplete?

  • The “implicitly unwrapped optional operator” (!) should really either be called the “subversion operator” or the “crash operator.” The major drumbeat we’ve been told about Swift is that it’s supposed to not allow you to do unsafe things; hence (among other things) we have Optionals. By implicitly unwrapping an optional, we’re telling the compiler “I know better than you right now, so I’m just going to go ahead and subvert the rules and pretend this thing isn’t nil, because I know it’s not.” When you do this, you’re either going to be correct, in which case Swift was wrong for thinking a value was maybe going to be nil when it isn’t; or you’re going to be incorrect and cause your application to crash. (A short illustration follows.)
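To make that trade-off concrete (a hypothetical sketch, not code from any real project):

    struct User {
        let name: String
    }

    // A lookup that can legitimately fail, so it returns an optional.
    func findUser(named name: String) -> User? {
        return name == "Ada" ? User(name: name) : nil
    }

    // Force-unwrapping asserts "I know better than the compiler";
    // when that assertion is wrong, the program crashes right here.
    // let user = findUser(named: "Grace")!   // runtime crash: unexpectedly found nil

    // Optional binding keeps the compiler's guarantee instead of subverting it.
    if let user = findUser(named: "Grace") {
        print("found \(user.name)")
    } else {
        print("no such user")
    }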

Objective Next

Earlier this year, before Swift was announced, I published an essay, Objective Next which discussed replacing Objective C, both in terms of the language itself and, more importantly, what we should thirst for in a successor:

We don’t need a better Objective C; we need a better way to make software. We do that in two steps: figure out what we’re trying to accomplish, and then figure out how to accomplish it. It’s simple, but nearly every post about replacing Objective C completely ignores these two steps.

In short, a replacement for Objective C that just offers a slimmed-down syntax isn’t really a victory at all. It’s simply a new-old thing. It’s a new way to accomplish the exact same kind of software. In a followup essay, I wrote:

If there was one underlying theme of the essay, it was “Don’t be trapped by Instrumental Thinking”, that particularly insidious kind of thinking that plagues us all (myself included) to thinking about new ideas or technologies only in terms of what we’re currently doing. That is, we often can only see or ask for a new tool to benefit exactly the same job we’re currently doing, where instead we should consider new kinds of things it might enable us to do. […]

When talking in terms of Objective C development, I don’t mean “I’m dreaming of a replacement that’ll just let you create the exact same identical apps, it’ll just have fewer warts,” but I instead mean I’m dreaming of a new, fundamental way to approach building software, that will result in apps richer in the things we care about, like visual and graphic design, usability and interaction, polish, and yes, offer enhancements to the undercarriage, too.

This, unfortunately, is exactly what we got with Swift. Swift is a better way to create the exact same kind of software we’ve been making with Objective C. It may crash a little less, but it’s still going to work exactly the same way. And in fact, because Swift is far more static than Objective C, we might even be a little bit more limited in terms of what we can do. For example, as quoted in a recent link:

The quote in this post’s title [“It’s a Coup”], from Andrew Pontious, refers to the general lack of outrage over the loss of dynamism. In broad strokes, the C++ people have asserted their vision that the future will be static, and the reaction from the Objective-C crowd has been apathy. Apple hasn’t even really tried to make a case for why this U-turn is a good idea, and yet the majority seems to have rolled over and accepted it, anyway.

I still think Swift is a great language and you should use it, but I do find it lamentably not forward-thinking. The intentional lack of a garbage collector really sealed the deal for me. Swift isn’t a new language; it’s C++++. I am glad to get to program in it, and I think the more people using it today, the better it will be tomorrow.

Belief

I’ve had this thought about beliefs stuck in my head for a few months, and I thought it might be useful to share. The thought goes something like this:

A belief about something is scaffolding we should use until we’ve learned more truths about that something.

First, I should point out I don’t think this statement is necessarily entirely true (though it could be), but I do think it’s a useful starting point for a discussion. Second, I also don’t think this view on belief is widely practiced, but I do think it would make for more productive use of beliefs themselves.

We humans tend to be a very belief-based bunch. There are the obvious beliefs like religion and other similar deifications (“What would our forefathers think?”) but we hold strong beliefs all the time without even realizing it.

The public education systems in North America (as I experienced firsthand in Canada and as I’ve read about in America) are based on students believing and internalizing a finite set of “truths” (this is known as a curriculum) and taking precisely those beliefs as granted.

Science presents perhaps the best evidence that we’re largely a belief-based species, as science exists to seek truths our beliefs are not adapted to explaining. Before the invention of science, we relied on our beliefs to make sense of the world as best we could, but beliefs painted a blurry, monochromatic picture at best. Science is hard because it has to be hard—its job is to adapt parts of the universe which we can’t intuit into something we can place in our concept of reality—but it does a far better job of explaining reality than our beliefs do.

A friend of mine recently told me, “I have beliefs about the world just like everybody else…I just don’t trust them, is all.” I think that’s a productive way to think about beliefs. It would probably be impossible to rid the world of belief, but I think a better approach is to acknowledge and understand belief as a useful, temporary tool. We should teach people to think about belief as a useful means to an end, as a support system, until more is learned about something. Most importantly, we should teach that beliefs should have a shelf-life, and not be permanently trusted.

The Future Programming Manifesto  

Jonathan Edwards:

Most of the problems of software arise because it is too complicated for humans to handle. We believe that much of this complexity is unnecessary and indeed self-inflicted. We seek to radically simplify software and programming. […]

We should measure complexity as the cumulative cognitive effort to learn a technology from novice all the way to expert. One simple surrogate measure is the size of the documentation. This approach conflicts with the common tendency to consider only the efficiency of experts. Expert efficiency is hopelessly confounded by training and selection biases, and often justifies making it harder to become an expert. We are skeptical of “expressive power” and “terseness”, which are often code words for making things more mathematical and abstract. Abstraction is a two-edged sword.

It’s a Coup  

Michael Tsai:

The quote in this post’s title, from Andrew Pontious, refers to the general lack of outrage over the loss of dynamism. In broad strokes, the C++ people have asserted their vision that the future will be static, and the reaction from the Objective-C crowd has been apathy. Apple hasn’t even really tried to make a case for why this U-turn is a good idea, and yet the majority seems to have rolled over and accepted it, anyway.

Questions!

Why don’t more people question things? What does it mean to question things? What kinds of things do we need to question? What kinds of answers do we hope to find from those questions? What sort of questions are we capable of answering? How do we answer the rest of the questions? Would it help if more people read books? Why does my generation, self included, insist on not reading books? Why do we insist on watching so much TV? Why do we insist on spending so much time on Twitter or Facebook? Why do I care so much how many likes or favs a picture or a post gets? What does it say about a society driven by that? Why are we so obsessed with amusing ourselves to death? Why are there so many damn photo sharing websites and todo applications? Is anybody even reading this? How do we make the world a better place? What does it mean to make the world a better place? Why do we think technology is the only way to accomplish this? Why are some people against technology? Do these people have good reasons for what they believe? Are we certain our reasons are better? Can we even know that for sure? What does it mean to know something for sure? Do computers cause more problems than they solve? Will the world be a better place if everyone learns to program? If we teach all the homeless people Javascript will they stop being homeless? What about the sexists and the racists and the fascists and the homophobes? Who else can help? How do we get all these people to work together? How do we teach them? How can we let people learn in better ways? How can we convince people to let go of their strategies adapted for the past and instead focus on the future? Why are there so many answers to the wrong questions?

Bicycle

The bicycle is a surprisingly versatile metaphor and has been on my mind lately. Here are all the uses I could think of for the bicycle as a metaphor when talking about computing.

Perhaps the most famous use is by Steve Jobs, who apparently wanted to rename the Macintosh to “Bicycle”. Steve explains why in this video:

I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And, humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So, that didn’t look so good. But, then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And, a man on a bicycle, a human on a bicycle, blew the condor away, completely off the top of the charts.

And that’s what a computer is to me. What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds.

On the other end of the metaphor spectrum, we have Doug Engelbart, who believed powerful tools required powerful training for users to realize their full potential, but that the world is more satisfied with dumbed down tools:

[H]ow do you ever migrate from a tricycle to a bicycle? A bicycle is very unnatural and hard to learn compared to a tricycle, and yet in society it has superseded all the tricycles for people over five years old. So the whole idea of high-performance knowledge work is yet to come up and be in the domain. It’s still the orientation of automating what you used to do instead of moving to a whole new domain in which you are obviously going to learn quite a few new skills.

And again from Belinda Barnet’s Memory Machines:

[Engelbart]: ‘Someone can just get on a tricycle and move around, or they can learn to ride a bicycle and have more options.’

This is Engelbart’s favourite analogy. Augmentation systems must be learnt, which can be difficult; there is resistance to learning new techniques, especially if they require changes to the human system. But the extra mobility we could gain from particular technical objects and techniques makes it worthwhile.

Finally, perhaps my two favourite analogies come from Alan Kay. Much like Engelbart, Alan uses the bicycle as a metaphor for learning:

I think that if somebody invented a bicycle now, they couldn’t get anybody to buy it because it would take more than five minutes to learn, and that is really pathetic.

But Alan has a more positive metaphor for the bicycle (3:50), which gives me some hope:

The great thing about a bike is that it doesn’t wither your physical attributes. It takes everything you’ve got, and it amplifies that! Whereas an automobile puts you in a position where you have to decide to exercise. We’re bad at that because nature never required us to have to decide to exercise. […]

So the idea was to try to make an amplifier, not a prosthetic. Put a prosthetic on a healthy limb and it withers.

Humans Need Not Apply  

Here’s a light topic for the weekend:

This video combines two thoughts to reach an alarming conclusion: “Technology gets better, cheaper, and faster at a rate biology can’t match” + “Economics always wins” = “Automation is inevitable.”

This pairs well with the book I’m currently reading, Nick Bostrom’s Superintelligence, and there’s an interesting discussion on reddit, too.

It’s important to remember that even if this speculation is true and humans in the future are largely unemployable, there are other things for a human to do than just work.

Documentation  

Dr. Drang:

There seems to be a belief among software developers nowadays that providing instructions indicates a failure of design. It isn’t. Providing instructions is a recognition that your users have different backgrounds and different ways of thinking. A feature that’s immediately obvious to User A may be puzzling to User B, and not because User B is an idiot.

You may not believe this, but when the Macintosh first came out everything about the user interface had to be explained.

Agreed. Of course you have to have a properly labeled interface, but that doesn’t mean you can’t have more powerful features explained in documentation. The idea that everything should be “intuitive” is highly toxic to creating powerful software.

Jef Raskin on “Intuitive Interfaces”  

Jef Raskin:

My subject was an intelligent, computer-literate, university-trained teacher visiting from Finland who had not seen a mouse or any advertising or literature about it. With the program running, I pointed to the mouse, said it was “a mouse”, and that one used it to operate the program. Her first act was to lift the mouse and move it about in the air. She discovered the ball on the bottom, held the mouse upside down, and proceeded to turn the ball. However, in this position the ball is not riding on the position pick-offs and it does nothing. After shaking it, and making a number of other attempts at finding a way to use it, she gave up and asked me how it worked. She had never seen anything where you moved the whole object rather than some part of it (like the joysticks she had previously used with computers): it was not intuitive. She also did not intuit that the large raised area on top was a button.

But once I pointed out that the cursor moved when the mouse was moved on the desk’s surface and that the raised area on top was a pressable button, she could immediately use the mouse without another word. The directional mapping of the mouse was “intuitive” because in this regard it operated just like joysticks (to say nothing of pencils) with which she was familiar.

From this and other observations, and a reluctance to accept paranormal claims without repeatable demonstrations thereof, it is clear that a user interface feature is “intuitive” insofar as it resembles or is identical to something the user has already learned. In short, “intuitive” in this context is an almost exact synonym of “familiar.”

And

The term “intuitive” is associated with approval when applied to an interface, but this association and the magazines’ rating systems raise the issue of the tension between improvement and familiarity. As an interface designer I am often asked to design a “better” interface to some product. Usually one can be designed such that, in terms of learning time, eventual speed of operation (productivity), decreased error rates, and ease of implementation it is superior to competing or the client’s own products. Even where my proposals are seen as significant improvements, they are often rejected nonetheless on the grounds that they are not intuitive. It is a classic “catch 22.” The client wants something that is significantly superior to the competition. But if superior, it cannot be the same, so it must be different (typically the greater the improvement, the greater the difference). Therefore it cannot be intuitive, that is, familiar. What the client usually wants is an interface with at most marginal differences that, somehow, makes a major improvement. This can be achieved only on the rare occasions where the original interface has some major flaw that is remedied by a minor fix.

Nobody knew how to use an iPhone before they saw someone else do it. There’s nothing wrong with more powerful software that requires a user to learn something.

The Wealth of Applications  

Graham Lee:

Great, so dividing labour must be a good thing, right? That’s why a totally post-Smith industry like producing software has such specialisations as:

full-stack developer

Oh, this argument isn’t going the way I want. I was kindof hoping to show that software development, as much a product of Western economic systems as one could expect to find, was consistent with Western economic thinking on the division of labour. Instead, it looks like generalists are prized.

On market demand:

It’s not that there’s no demand, it’s that the demand is confused. People don’t know what could be demanded, and they don’t know what we’ll give them and whether it’ll meet their demand, and they don’t know even if it does whether it’ll be better or not. This comic strip demonstrates this situation, but tries to support the unreasonable position that the customer is at fault over this.

Just as using a library is a gamble for developers, so is paying for software a gamble for customers. You are hoping that paying for someone to think about the software will cost you less over some amount of time than paying someone to think about the problem that the software is supposed to solve.

But how much thinking is enough? You can’t buy software by the bushel or hogshead. You can buy machines by the ton, but they’re not valued by weight; they’re valued by what they do for you. So, let’s think about that. Where is the value of software? How do I prove that thinking about this is cheaper, or more efficient, than thinking about that? What is efficient thinking, anyway?

I think you can answer this question if you frame most modern software as “entertainment” (or at least, Apps are Websites). It’s certainly not the case that all software is entertainment, but perhaps for the vast majority of people software as they know it is much closer to movies and television than it is to references or mental tools. The only difference is, software has perhaps completed the ultimate wet dream of the entertainment market in that Pop Software doesn’t even really have personalities like music or TV do — the personality is solely that of the brand.

Off and On

I started working on a side project in January 2014, and like many of my side projects over the years, after an initial few months of vigorous work, the last little while has been mostly off-and-on work on the project.

The typical list of explanations applies: work gets in the way (work has been a perpetual crunch mode for months now), the project has reached a big enough size that it’s hard to make changes (I’m on an unfamiliar platform), and I’m stuck at a particularly difficult problem (I saved the best for last!).

Since the summer of work on this project has been more or less fruitless, I’m taking a different approach going forward, one I’ve used with some success in the past. It comes down to three main things:

  1. Limit the work to one hour per day, usually in the morning. This causes me to get at least something done every day, even if it’s just small or infrastructure work. I’ve found limiting myself to a small amount of time (two hours works well too) also forces me to not procrastinate or get distracted while I’m working. The hour of side project becomes precious and not something to waste.

  2. Stop working while you’re in the middle of something so you have somewhere to ramp up from the next time you start (I’m pretty sure this one is cribbed directly from Ernest Hemingway).

  3. Keep a diary for your work. I do this for most of my projects by updating a text file at the end of each day’s work with my thoughts for the day. I usually write about what I worked on and what I plan to work on the next day. This complements step 2 because it lets me see where I left off and what I was planning to do. It also helps bring any subconscious thoughts about the project to the front of my brain. I’ll usually spend the rest of the day thinking about it, and I’ll be eager to get started again the next day (which helps fuel step 1: I have lots of ideas and want to stay focused on them, which pushes me to work better to get them done).

That, and I’ve set a release date for myself, which should hopefully keep me focused, too.

Here goes.

Transliterature, a Humanist Design  

Ted Nelson:

You have been taught to use Microsoft Word and the World Wide Web as if they were some sort of reality dictated by the universe, immutable “technology” requiring submission and obedience.

But technology, here as elsewhere, masks an ocean of possibilities frozen into a few systems of convention.

Inside the software, it’s all completely arbitrary. Such “technologies” as Email, Microsoft Windows and the World Wide Web were designed by people who thought those things were exactly what we needed. So-called “ICTs” – “Information and Communication Technologies,” like these – did not drop from the skies or the brow of Zeus. Pay attention to the man behind the curtain! Today’s electronic documents were explicitly designed according to technical traditions and tekkie mindset. People, not computers, are forcing hierarchy on us, and perhaps other properties you may not want.

Things could be very different.

JUMP Math: Multiplying Potential (PDF)  

John Mighton:

Research in cognitive science suggests that, while it is important to teach to the strengths of the brain (by allowing students to explore and discover concepts on their own), it is also important to take account of the weaknesses of the brain. Our brains are easily overwhelmed by too much new information, we have limited working memories, we need practice to consolidate skills and concepts, and we learn bigger concepts by first mastering smaller component concepts and skills.

Teachers are often criticized for low test scores and failing schools, but I believe that they are not primarily to blame for these problems. For decades teachers have been required to use textbooks and teaching materials that have not been evaluated in rigorous studies. As well, they have been encouraged to follow many practices that cognitive scientists have now shown are counterproductive. For example, teachers will often select textbooks that are dense with illustrations or concrete materials that have appealing features because they think these materials will make math more relevant or interesting to students. But psychologists such as Jennifer Kaminski have shown that the extraneous information and details in these teaching tools can actually impede learning.

(via @mayli)

Idle Creativity  

Andy Matuschak (in 2009):

When work piles up, my brain doesn’t have any idle cycles. It jumps directly from one task to another, so there’s no background processing. No creativity! And it feels like all the color and life has been sucked out of the world.

I don’t mind being stressed or doing lots of work or losing sleep, but I’ve been noticing that I’m a boring person when it happens!

The Colour and the Shape: Custom Fonts in System UI  

Dave Wiskus:

When the user is in your app, you own the screen.

…Except for the status bar — that’s Helvetica Neue. And share sheets. And Alerts. And in action sheets. Oh, and in the swipe-the-cell UI in iOS 8. In fact any stock UI with text baked in is pretty much going to use Helvetica Neue in red, black, and blue. Hope you like it.

Maybe this is about consistency of experience. Perhaps Apple thinks that people with bad taste will use an unreadable custom font in a UIAlert and confuse users.

I agree with Dave that the lack of total control is vexing, but I think that’s because Apple wants us to treat these system features, alert views and the status bar, more or less like hardware. They’re immutable; they “come from the OS” as if printed on Official iOS Letterhead paper.

This is why I think Apple doesn’t want us customizing these aspects of iOS. They want to keep the “official” bits as untampered with as possible.
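
To make that boundary concrete, here’s a minimal Swift sketch (the view controller, label, and alert below are my own illustration, not code from Dave’s post; “Avenir-Book” is just a stand-in for whatever typeface an app ships or picks): the views an app owns can be set in any font, while stock pieces like UIAlertController expose no public font API, so their text stays in the system font regardless.

```swift
import UIKit

// Hypothetical view controller, purely for illustration.
final class GreetingViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // Views the app owns can use any typeface it likes.
        // "Avenir-Book" ships with iOS; a bundled custom font works the same way.
        let label = UILabel()
        label.font = UIFont(name: "Avenir-Book", size: 17) ?? UIFont.systemFont(ofSize: 17)
        label.text = "Hello from a custom typeface"
        label.sizeToFit()
        label.center = view.center
        view.addSubview(label)
    }

    func confirmDelete() {
        // Stock UI like UIAlertController has no public font property;
        // its title, message, and buttons render in the system font
        // no matter how the rest of the app is styled.
        let alert = UIAlertController(title: "Delete?",
                                      message: "This can't be undone.",
                                      preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "Cancel", style: .cancel, handler: nil))
        alert.addAction(UIAlertAction(title: "Delete", style: .destructive, handler: nil))
        present(alert, animated: true, completion: nil)
    }
}
```

That split is the “Official iOS Letterhead” effect in code: the app styles what it owns, and the OS keeps its own stamp on the bits it vouches for.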

Apps are Websites

The Apple developer community is atwitter this week about independent developers and whether or not they can earn a good living working independently on the Mac and/or iOS platforms. It’s a great discussion about an unfortunately bleak topic. It’s sad to hear that so many great developers, working on so many great products, are doing so poorly from them. And it seems like a lot of it is mostly out of their control (if I thought I knew a better way, I’d be doing it!). David Smith summarizes most of the discussion (with an awesome list of links):

It has never been easy to make a living (whatever that might mean to you) in the App Store. When the Store was young it may have been somewhat more straightforward to try something and see if it would hit. But it was never “easy”. Most of my failed apps were launched in the first 3 years of the Store. As the Store has matured it has also become a much more efficient marketplace (in the economics sense of market). The little tips and tricks that I used to be able to use to gain an ‘unfair’ advantage now are few and far between.

The basic gist seems to be “it’s nearly impossible to make a living off iOS apps, and it’s possible but still pretty hard to do off OS X.” Most of us, I think, would agree you can charge more for OS X software than you can for iOS because OS X apps are usually “bigger” or more fleshed out, but I think that’s only half the story.

The real reason it’s so hard to sell iOS apps is that iOS apps are really just websites. Implementation details aside, I’d bet 95 per cent of people think of iOS apps the same way they think about websites. The websites most people are exposed to are mostly promotional, ad-laden, and, most importantly, free. Most people do not pay for websites. To them, a website is just something you visit and use, not a piece of software, and that’s exactly how they think of and treat iOS apps. That’s why indie developers are having such a hard time making money.

(Just so we’re clear, I’ve been making iOS apps for the whole duration of the App Store and I know damn well iOS apps are not “websites.” I’m well aware they are self-contained binaries that may or may not use the internet or web services. I’m talking purely about perceptions here.)

For a simple test, ask any of your non-developer friends what the difference between an “app” and an “application” or “program” is, and I’d be willing to bet they think of them as distinct concepts. To most people, “apps” are only on your phone or tablet, and programs are bigger and on your computer. “Apps” seem to be a wholly different category of software from programs like Word or Photoshop, and the idea that Mac and iOS apps are basically the same on the inside doesn’t really occur to people (nor does it need to, really). People “know” apps aren’t the same thing as programs.

Apps aren’t really “used” so much as they are “checked” (how often do people “check Twitter” vs “use Twitter”?), and a check is usually a brief “visit” measured in seconds (of, ugh, “engagement”). Most apps are used briefly and fleetingly, just like most websites. iOS, then, isn’t so much an operating system as a browser, and the App Store is its crappy search engine. Your app is one of limitless other apps, just like your website is one of limitless other websites. The ones people have heard of are the ones that are promoted and advertised, or the ones in their own niches.

I don’t know how to make money in the App Store, but if I had to, I’d try to learn from financially successful websites. I’d charge a subscription and I’d provide value. I’d make an app that did something other than have a “feed” or a “stream” or “shared moments.” I’d make an app that helps people create or understand. I’d try new things.

I couldn’t charge $50 for an “app” because apps are perceived as not having that kind of value, and I have to agree (I know firsthand how much work goes into making an app, but that doesn’t make the app valuable). So maybe we need to create a new category of software on iOS, one that breaks out of the “app” shell (and maybe out of the moniker, too). I don’t know what that entails, but I’m pretty sure that’s what we need.