Anti-fragile UX

cognitions, design, design thinking, strategy

This is a repost of an idea I’ve dreamt of for nearly a decade (and leveraged to help improve design thinking and approaches, though not to the extent described below). Now, in this time of AI, global audiences, and awareness of accessibility, it seems this could be possible. (Please note: some links now go to the Wayback Machine capture of a site.)


Nobody wants a fragile user experience. When you imagine such a site, the words that come to mind are probably "buggy," "not very usable," "difficult to navigate," "limited compatibility," and most definitely "not user-friendly."

Now imagine a robust web app. This site would work across most if not all browsers and devices, “gracefully degrading” when necessary. It would be usable, useful, and user-friendly, fulfilling the promise of the site for the user. Bugs would be a rare event.

After reading Nassim Taleb’s antifragility discussion on Edge’s World Question Center, I think we can do better. As Taleb envisions it, an antifragile system is one that is “beyond robustness,” one that not only withstands disorder and change, but loves those things. Taleb provides an example:

Just as a package sent by mail can bear a stamp “fragile”, “breakable” or “handle with care”, consider the exact opposite: a package that has stamped on it “please mishandle” or “please handle carelessly”. The contents of such package are not just unbreakable, impervious to shocks, but have something more than that, as they tend to benefit from shocks.

So let us coin the appellation “antifragile” for anything that, on average, …benefits from variability.

In this and following posts, I’m going to discuss what the characteristics of an anti-fragile web app might be. These include (but are not necessarily limited to):

  • A self-refining interface. The more browsers, devices, and user preferences it’s exposed to, the better it can refine itself, and predict or suggest the ideal UI for a given user with a given browser or device.
  • Self-refining taxonomy. A content strategy that benefits from variety and size. I’m convinced that in the post-Google, post-UX, post-social media world, semantic information management in all forms will be the next big thing. (Note: by post-Google, post-UX, etc., I don’t mean a world existing without those things. Rather, I mean the world that has thoroughly incorporated these and similar game-changing concepts and is ready to grow from there.)
  • Simplicity of structure, allowing flexibility of response.
  • Loves change. Learns from being used for new and unexpected purposes, adapting the new ability or use to improve or expand existing features.
  • The broader and more varied the audience, the more information there is to develop targeted content and interfaces.

self-refining interface

What on earth is a self-refining interface? It is one that adjusts itself to user needs, at either the aggregate or the individual level. Ideally it would do both.

Today we have a plethora of interfaces with which to browse the web. Notepads, smart phones, PDAs, laptops, televisions, and more are used to present online information. There are even a few awkward-looking wristwatches receiving online updates, heralding the arrival of the smart gadget. The Pew Internet & American Life Project reports a sharp increase in adults using mobile devices to access the internet, as well as in other online activities. Cell phone ownership is stable, but using phones for purposes other than phone calls is going up, up, up.

This marks the beginning of the end of pixel-perfect web design. No longer is there a single fold, above which content cues should reside; no longer can a company focus solely on meeting their audience’s needs by designing for the top three browsers across the top two computer operating systems. Graceful degradation is going the way of the dodo. Instead, we need evolutionary designs, adaptable to a variety of niches.

Companies that have already focused on this typically seek to determine the device being used by a particular user, then serve them content optimized for that device. Unfortunately, with the broad variety of devices in use, it’s difficult to accommodate all of them. Alternatively, they offer a “mobile” or “text-only” link, optimized for users with low bandwidth or smaller mobile devices. Either way, we have only a couple of optimizations, and as user trends change, the developers behind a given web application or site must run to keep up.

Built-in design adaptability might work in many cases. For example, a combination of incrementally sized, wrapping modules and liquid layout could flexibly accommodate both broader and shorter resolutions (the Xoom’s resolution, for example, is 1280 x 800). Navigation could be persistent, but fly out on mouseover. Tricky to do, but not impossible. There is no “graceful degradation” because all resolutions are intended to happen. But this is merely robust.
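A minimal sketch of that kind of built-in adaptability, in which every resolution is a designed target rather than a fallback. The breakpoints and class names here are my own placeholders, not from any real framework:

```typescript
// Treat every resolution as intended: pick the first matching layout
// and reflow the module container, rather than degrading to a fallback.
// Breakpoints and class names are illustrative assumptions only.
const breakpoints = [
  { query: "(min-width: 1280px)", layoutClass: "modules--three-across" },
  { query: "(min-width: 800px)", layoutClass: "modules--two-across" },
  { query: "(max-width: 799px)", layoutClass: "modules--stacked" },
];

function applyLayout(container: HTMLElement): void {
  // Queries are ordered most-specific first; only the first match wins.
  const active = breakpoints.find((bp) => window.matchMedia(bp.query).matches);
  for (const bp of breakpoints) {
    container.classList.toggle(bp.layoutClass, bp === active);
  }
}

const container = document.querySelector<HTMLElement>("#modules");
if (container) {
  applyLayout(container);
  // Re-apply whenever a breakpoint flips, e.g. rotating a Xoom.
  for (const bp of breakpoints) {
    window
      .matchMedia(bp.query)
      .addEventListener("change", () => applyLayout(container));
  }
}
```

Because each breakpoint maps to a first-class layout, nothing here “degrades”; but the set of layouts is still fixed in advance, which is exactly why this is merely robust.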

What if the web application itself took this optimization a step further? Imagine these scenarios:

A site that actively analyzes user system demographics and develops UI and navigation options for a variety of interfaces; users can select their preferred default. Depending on the intelligence of the system, these could be based on persona types, or actually customized on a user-by-user basis.
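In sketch form, the aggregate version might tally coarse device traits across visits and map each cluster to a default UI that users can override. The trait buckets and variant names below are invented for illustration:

```typescript
// Tally coarse device traits across visits, then map each cluster to a
// default UI variant. Trait buckets and variant names are hypothetical.
type UiVariant = "wide-grid" | "narrow-stack" | "touch-first";

const traitCounts = new Map<string, number>();

function observeVisit(): string {
  const trait =
    "ontouchstart" in window ? "touch"
    : window.screen.width >= 1280 ? "wide"
    : "narrow";
  traitCounts.set(trait, (traitCounts.get(trait) ?? 0) + 1);
  return trait;
}

// Users can still select their own preferred default, per the scenario.
function defaultVariantFor(trait: string): UiVariant {
  switch (trait) {
    case "touch":
      return "touch-first";
    case "wide":
      return "wide-grid";
    default:
      return "narrow-stack";
  }
}
```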

Proactively personalized interface preferences. Based on a user’s interaction behavior, the site infers their content and navigational preferences and presents or suggests an interface matching those. Do they like clicking on tags? Perhaps a tag cloud-driven navigation should be integrated into their UI.
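A minimal sketch of that inference, with invented event names and an arbitrary evidence threshold:

```typescript
// Count interaction kinds; once there is enough evidence, suggest the
// navigation style that matches the dominant behavior. Event names and
// the evidence threshold are assumptions for illustration.
type NavStyle = "tag-cloud" | "menu" | "search-first";

const clicks = { tag: 0, menu: 0, search: 0 };

function recordClick(kind: keyof typeof clicks): void {
  clicks[kind] += 1;
}

function suggestedNav(minEvidence = 20): NavStyle | null {
  const total = clicks.tag + clicks.menu + clicks.search;
  if (total < minEvidence) return null; // too little signal to infer
  if (clicks.tag >= clicks.menu && clicks.tag >= clicks.search) {
    return "tag-cloud"; // heavy tag use: integrate tag-cloud navigation
  }
  return clicks.search > clicks.menu ? "search-first" : "menu";
}
```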

To be honest, I’m not certain what a truly antifragile user experience would look like. But I know we’ll never get there if we don’t think about it; and thinking about it will bring us more robust UX along the way.

27 February 2011

Originally posted on UXtraordinary. See the archived original post.

Aldrich on educational engagement (and PowerPoint)

inspiration

Simulations are powerful when students need to be engaged more than they are. Clearly, this is an area in which distributed classrooms have suffered, as death by PowerPoint has not just been refined in many programs but almost weaponized to military specifications.

— Clark Aldrich

UX happens everywhere

design thinking, psychology

“My experience is what I agree to attend to,” said William James. Although James wasn’t talking about user experience as designers think of it, this is my favorite UX quote, and one I believe every UX architect, designer, or strategist should keep in mind. Today I’m writing about the implications this has on where we should focus our attention.

Where a person’s attention goes, there goes their experience of the world. In other words, UX happens everywhere.

Your product may be the ultimate experience you want your users to have, and your web site experience may help get them to purchase it (or be the goal itself, if you’re a social network or some other online service). But long before they land on your site or purchase your product, every interaction of the user with your brand is UX.

What people say about your product on social networks or blogs, your advertising (online and off), how your competitors represent you and your service: all of these are part of the experience. Your content lives everywhere, and your existing users and prospects can potentially encounter it everywhere. You can’t control this, but you can add to the milieu in a variety of ways: blogs, forums, social networks, videos, mobile applications, gadgets, rich media advertising, news, and advertising on more targeted sites.

Why does this matter? Because people make decisions in an all-or-nothing manner. Neurologically speaking, every encounter creates a positive or negative moment in a user’s head—a yes/no binary decision. A user’s overall impression comes from the preponderance of these individual binary choices associated with a concept.
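As a toy model of that claim (my own simplification, not a neurological one), an overall impression is just the sign of a running tally of binary encounters:

```typescript
// Each encounter lands as +1 or -1; the overall impression is simply
// the sign of the running tally. A deliberate oversimplification.
function impression(encounters: Array<1 | -1>): string {
  const sum = encounters.reduce((acc, e) => acc + e, 0);
  return sum > 0 ? "positive" : sum < 0 ? "negative" : "undecided";
}

// Three good moments outweigh two bad ones.
console.log(impression([1, 1, -1, 1, -1])); // "positive"
```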

Further, in the absence of knowledge, most people tend to go with whatever information gets in first with the most. In this way informational cascades, which may or may not be accurate, spread across a population. (This may be why car salespeople are trained to get customers to say “yes” more than once, and to have them speak to more than one salesperson. You can read more about binary decision making and informational cascades in The tyranny of dichotomy.)

If you want users to “agree to attend” long enough to ultimately experience your product, one way is to make sure the positive binary moments around your brand and product outnumber the negative ones. Every encounter with your brand weights a user’s interest in one direction or another. It’s clearly in our interest as UX strategists to create positive user experiences in every relevant context possible.

Update: I don’t think I said clearly enough here that “positive” requires an experience to be honest and to the user’s advantage. So I’m saying it now.


Originally posted on the alexfiles (1998–2018) on January 1, 2011.

The tyranny of dichotomy

psychology

An informational cascade is a perception—or misperception—spread among people, because we tend to let others think for us when we don’t know ourselves. For example, John Tierney (tierneylab.blog.nytimes.com) recently discussed the widely held but little-supported belief that too much fat is nutritionally bad. Peter Duesberg contends that the HIV hypothesis for AIDS is such an error (please note, I am not agreeing with him).

Sometimes cultural assumptions can lead to such errors. Stephen Jay Gould described countless such mistakes, spread by culture or simple lack of data, in The Mismeasure of Man. Gould points out errors such as reifying abstract concepts into entities that exist apart from our abstraction (as has been done with IQ) and forcing measurements into artificial scales, both assumptions that spread readily, without any backing, within and without the scientific community.

Mind, informational cascades do not have to be errors—one could argue that the state of being “cool” comes from an informational cascade. Possibly many accurate understandings come via informational cascades as well, but it’s harder to demonstrate those because of the nature of the creatures.

It works like this: people tend to think in binary, all-or-nothing terms. Shades of gray do not occur. In fact, it seems the closest we come to a non-binary understanding of a concept is to have many differing binary decisions about related concepts, which balance each other out.

So, in the face of no or incomplete information, we take our cues from the next human. When Alice makes a decision, she decides yes-or-no; then Bob, who knows nothing of the subject, takes his cue from Alice in a similarly binary fashion, and Carol takes her cue from Bob, and so it spreads, in a cascade effect.
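A small simulation makes the lock-in visible. Here each person gets a private signal that is right only 60% of the time, copies the visible majority when there is one, and falls back on their own signal only on a tie (the accuracy figure and tie-breaking rule are my assumptions, not from the literature):

```typescript
// Each person receives a private signal that is right 60% of the time,
// copies the visible majority when there is one, and uses the signal
// only on a tie. Accuracy and tie-break rule are assumptions.
function simulateCascade(people: number, accuracy = 0.6): boolean[] {
  const truth = true;
  const choices: boolean[] = [];
  for (let i = 0; i < people; i++) {
    const signal = Math.random() < accuracy ? truth : !truth;
    const yes = choices.filter((c) => c).length;
    const no = choices.length - yes;
    choices.push(yes > no ? true : no > yes ? false : signal);
  }
  return choices;
}

// Alice (person 0) decides on her signal alone; Bob then copies Alice,
// Carol copies the majority, and the run locks in, right or wrong.
console.log(simulateCascade(10));
```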

Economists and others rely on this binary herd behavior in their calculations.

But.

The problem is that people don’t always think this way; therefore people don’t have to think this way. Some people seem to acquire the habit of critical thought at an early age. As well, the very concept of binary thinking fits a little too neatly into our need to measure. It’s much easier to measure all-or-nothing than shades of gray, so a model that assumes we behave in an all-or-nothing manner can easily be measured, and is therefore more easily accepted within the community of discourse.

Things tend to be more complex than we like to acknowledge. As Stephen Wolfram observed in A New Kind of Science,

One might have thought that with all their successes over the past few centuries the existing sciences would long ago have managed to address the issue of complexity. But in fact they have not. And indeed for the most part they have specifically defined their scope in order to avoid direct contact with it.

Which makes me wonder if binary classification isn’t its own informational cascade. In nearly every situation, there are more than two factors and more than two options.

The tradition of imposing a binary taxonomy on our world goes back a long way. Itkonen (2005) speaks of the binary classifications that permeate all mythological reasoning. By presenting different quantities as two aspects of the same concept, the storyteller makes them more accessible to the listener; placing them within the same concept shows their similarities and reaches the audience through analogical reasoning.

Philosophy speaks of the law of the excluded middle—something is either this or that, with nothing in between—but this is a trick of language. A question that allows only a yes or no answer leaves no room for responses such as “both” or “maybe.”

Neurology tells us that neurons either fire or they don’t. But neurons are much more complex than that. From O’Reilly and Munakata’s Computational Explorations in Cognitive Neuroscience (italics from the authors, boldface mine):

In contrast with the discrete boolean logic and binary memory representations of standard computers, the brain is more graded and analog in nature… Neurons integrate information from a large number of different input sources, producing essentially a continuous, real valued number that represents something like the relative strength of these inputs…The neuron then communicates another graded signal (its rate of firing, or activation) to other neurons as a function of this relative strength value. These graded signals can convey something like the extent or degree to which something is true….

Gradedness is critical for all kinds of perceptual and motor phenomena, which deal with continuous underlying values….

Another important aspect of gradedness has to do with the fact that each neuron in the brain receives inputs from many thousands of other neurons. Thus, each individual neuron is not critical to the functioning of any other—instead, neurons contribute as part of a graded overall signal that reflects the number of other neurons contributing (as well as the strength of their individual contribution). This fact gives rise to the phenomenon of graceful degradation, where function degrades “gracefully” with increasing amounts of damage to neural tissue.
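To make the contrast concrete, here is a toy graded unit next to its binary caricature. Both sum weighted inputs, but the graded one emits a continuous “firing rate” via a logistic squashing function; the weights and threshold are arbitrary, and nothing here comes from the book itself:

```typescript
// A binary caricature next to a graded unit: both sum weighted inputs,
// but the graded one outputs a continuous rate rather than 0 or 1.
function binaryNeuron(inputs: number[], weights: number[], threshold = 1): 0 | 1 {
  const strength = inputs.reduce((sum, x, i) => sum + x * weights[i], 0);
  return strength >= threshold ? 1 : 0;
}

function gradedNeuron(inputs: number[], weights: number[]): number {
  const strength = inputs.reduce((sum, x, i) => sum + x * weights[i], 0);
  return 1 / (1 + Math.exp(-strength)); // logistic "firing rate" in (0, 1)
}

// Dropping one of four inputs barely moves the graded output: a tiny
// instance of the "graceful degradation" the authors describe.
console.log(gradedNeuron([1, 1, 1, 1], [0.5, 0.5, 0.5, 0.5])); // ~0.88
console.log(gradedNeuron([1, 1, 1, 0], [0.5, 0.5, 0.5, 0.5])); // ~0.82
```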

So, now that we have a clue that binary thinking may be an informational cascade all its own, what do we do about it?


References

Itkonen, E. (2005). Analogy as structure and process: Approaches in linguistics, cognitive psychology and philosophy of science. Amsterdam: John Benjamins Publishing.

O’Reilly, R.C., and Y. Munakata. (2000). Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. Cambridge, MA: MIT Press.


Originally posted on alexfiles.com (1998–2018) on May 5, 2008.