Simplicity is not a goal, but a tool

design, design thinking

Simplicity in design is not a goal in itself, but a tool for better experience. The goal is the need of the moment: to sell a product, to express an opinion, to teach a concept, to entertain. While elegance and optimal function in design frequently overlap with simplicity, there are times when simplicity is not only impossible but actively hurts usability. Yet many designers do not understand this, and over the years I’ve seen the desire to “keep it simple, stupid” lead to poor UX.

I was therefore glad to see Francisco Inchauste’s well-thought-out, longer version of Einstein’s “as simple as possible, but no simpler” remark.

From the column:

As an interactive designer, my first instinct is to simplify things. There is beauty in a clean and functional interface. But through experience I’ve found that sometimes I can’t remove every piece of complexity in an application. The complexity may be unavoidably inherent to the workflow and tasks that need to be performed, or in the density of the information that needs to be presented. By balancing complexity and what the user needs, I have been able to continue to create successful user experiences.

Plus, as I’ve commented before, messy is fun!


Originally posted on former personal blog UXtraordinary.com.

Evolutional UX

design, design thinking

This was originally posted on the UXtraordinary blog, before I incorporated under that name. Since then this approach has proven successful for me in a variety of contexts, especially Agile (including Scrum, kanban, and Lean UX – which is an offshoot of Agile whether it likes it or not).


I subscribe to the school of evolutional design. In evolution, species change not to reach for some progressively-closer-to-perfection goal, but in response to each other and their ever-changing environment. My user experience must do likewise.

Rather than reach for pixel-perfect, which is relatively unattainable outside of print (and is probably only “perfect” to myself and possibly my client), I reach for what’s best for my users, which is in the interests of my client. I expect that “best” to change as my users change, and as my client’s services/products change. This approach makes it much easier to design for UX.

Part of evolutional design is stepping away from the graceful degradation concept. The goal is not degraded experience, however graceful, but differently adapted experience. In other words, it’s not necessary that one version of a design be best. Two or three versions can be equally good, so long as the experience is valuable. Think of the differences that simply resizing a window makes to a well-planned liquid design, without hurting usability. Are the different sizes bad? Of course not.

This approach strongly supports behavioral design, in which design focuses on the behavior and environment of the user. You might be designing for mobile, or a laptop, or video, or an e-newsletter; you might be designing for people being enticed to cross a pay wall, or people who have already paid and are enjoying your service. You might be appealing to different demographics in different contexts. Evolutional UX thinks in terms of adaptation within the digital (and occasionally analog) ecology.

Evolutional UX also reminds the designer that she herself is part of an evolving class of worker, with many species appearing and adapting and mutating and occasionally dying out. We must adapt, or fall out of the game—and the best way to do that is to design for your ever-changing audience and their ever-changing tools.

And now, some words of wisdom from that foremost evolutional ecologist, Dr. Seuss. Just replace the “nitch” spelling with “niche” and you’ve got sound ecological theory, as every hermit crab knows.

And NUH is the letter I use to spell Nutches,
Who live in small caves, known as Nitches, for hutches.
These Nutches have troubles, the biggest of which is
The fact there are many more Nutches than Nitches.
Each Nutch in a Nitch knows that some other Nutch
Would like to move into his Nitch very much.
So each Nutch in a Nitch has to watch that small Nitch
Or Nutches who haven’t got Nitches will snitch.


Designing for purpose

design thinking

This is the first of several presentations applying different psychological systems to user experience.

Designing for users is a tough job. To optimize our designs and strategy, UX professionals frequently turn to concept/site testing. The problem is that most design strategy and testing thinks in terms of input → output. We provide input, users perform a desired response (click-through, purchase, content creation). How to break out of this mold?

Perceptual control theory (PCT) assumes that all output is based on the ultimate goal of improved perceptual input. If you replace “input” in the previous sentence with “experience,” you’ll see the direction this discussion is going…
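
To make this concrete, here is a minimal sketch of a perceptual control loop (my own illustration, not code from PCT’s formal literature): the “user” acts only to close the gap between the experience they perceive and the experience they want. The function name, gain, and numbers are all hypothetical.

```python
# Minimal sketch of a perceptual control loop: output exists only to bring
# perception closer to an internal reference (the desired experience).
def perceptual_control_loop(reference: float, perceived: float,
                            gain: float = 0.5, steps: int = 20) -> float:
    """Act repeatedly to shrink the gap between what is perceived and what is wanted."""
    for _ in range(steps):
        error = reference - perceived   # how far current experience is from the goal
        action = gain * error           # behavior is chosen to reduce that gap
        perceived += action             # acting changes what is subsequently perceived
    return perceived

# The loop converges on the reference: output is a means, improved perception the end.
print(perceptual_control_loop(reference=1.0, perceived=0.0))
```

In design terms, the click-through, purchase, or content creation is the action inside the loop; the goal it serves is the user’s improved experience.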


Originally posted on UXtraordinary, August 3, 2009.

Fun is fundamental

design thinking, game elements, psychology

Fun is a seriously undervalued part of user experience (perhaps of any experience). In fact, a sense of play may be a required characteristic of good UX interaction. But too often, I hear comments like the following, seen on ReadWriteWeb:

When you think of virtual worlds, the first one that probably pops into your head is Second Life, but in reality, there are a number of different virtual worlds out there. There are worlds for socializing, worlds for gaming, even worlds for e-learning. But one thing that most virtual worlds have in common is that they are places for play, not practicality. (Yes, even the e-learning worlds are designed with elements of “fun” in mind).

I was surprised to see the concept of play set in tension with practicality, as if they were incompatible, and to read that “even the e-learning worlds” employed fun. Game elements have been used to promote online learning for well over a decade, and used in offline educational design for much longer.

I certainly don’t mean to imply that every web site can be made fun. But any site can employ the techniques of play in order to be more fun. As Clark Aldrich observes, discussing learning environments (emphasis his),

You cannot make most content fun for most people in a formal learning program… At best what you can do is make it more fun for the greatest percentage of the target audience. Using a nice font and a good layout doesn’t make reading a dry text engaging, but it may make it more engaging.

The driving focus, the criteria against which we measure success, should be on making content richer, more engaging, more visual, with better feedback, and more relevant. And of course more fun for most students.

It was while developing an educational site for Nortel Networks that I first discovered the value of game elements in design. Deliberately incorporating mini games, an ongoing “quest” hidden in the process, rewards (including surprise Easter eggs), levels, triggers, and scores (with a printable certificate) made the tedious process of learning how to effectively make use of an intranet database much more fun. We also offered different learning techniques, so users could learn by text, video, or audio, as they preferred.

This can apply to non-learning environments as well. Think about it: online games have already done all the heavy lifting in figuring out the basics of user engagement. Some techniques I’ve found valuable in retail, informational, and social media include:

  • Levels. These provide a sense of achievement for exploration, UGC (user-generated content), or accomplishment. Levels can reduce any possible sense of frustration at the unending quest.
  • Unending quest. There should always be a next step for users. This doesn’t mean the user needs to be told that they’ll never be through with the site. Instead, the site should always provide something engaging that leads them on to a next step, and a next, and so forth.
  • Surprise rewards/triggers. These include Easter egg links, short-term access to previously inaccessible documents, etc.
  • Mini games, which can result in recognition or rewards for the user and can provide research data and UGC for the site.
  • Scores, which can encourage competitiveness and a sense of accomplishment.
  • Avatars and other forms of personalization.
  • User-driven help and feedback. Users (particularly engineers, in my experience) love to be experts. Leverage this to support your help forums if you need them.
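
A bare-bones sketch of how a few of these elements might hang together in code (a hypothetical example, not the actual Nortel implementation): scores feed levels, and certain actions trigger surprise rewards. All names, thresholds, and rewards are placeholders.

```python
from dataclasses import dataclass, field

LEVEL_THRESHOLDS = [0, 100, 250, 500]   # points needed to reach each level (placeholder values)
SURPRISE_REWARDS = {"found_easter_egg": "Hidden archive unlocked for 24 hours"}

@dataclass
class UserProgress:
    score: int = 0
    rewards: list[str] = field(default_factory=list)

    @property
    def level(self) -> int:
        # Levels give a sense of achievement as the score crosses each threshold.
        return sum(1 for t in LEVEL_THRESHOLDS if self.score >= t)

    def record_action(self, action: str, points: int) -> None:
        self.score += points                     # scores feed competitiveness and accomplishment
        if action in SURPRISE_REWARDS:           # surprise rewards/triggers
            self.rewards.append(SURPRISE_REWARDS[action])

user = UserProgress()
user.record_action("completed_tutorial", 120)
user.record_action("found_easter_egg", 30)
print(user.level, user.rewards)                  # 2 ['Hidden archive unlocked for 24 hours']
```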

Online, offline, crunching numbers at work, immersed in a game, sitting in a classroom, or building a barn, a sense of fun doesn’t just add surface emotional value; it frequently improves the quality of the work and adds pleasant associations, making us more likely to retrieve useful data for later application. Perhaps this is why so many artists and scientists have been known for a sense of play. And for most of us, it’s during childhood – the time we are learning the most at the fastest rate – that we are typically at our most playful.

All websites are to some extent educational. Even a straightforward retail site wants you to learn what they offer, how to choose an item, and how to pay for it. Perhaps we can take a tip from our childhood and incorporate more fun into the user experience. Then we can learn how best to learn.

Originally posted on former personal blog UXtraordinary.com.

The tyranny of dichotomy

psychology

An informational cascade is a perception—or misperception—spread among people because we tend to let others think for us when we don’t know for ourselves. For example, recently John Tierney (tierneylab.blog.nytimes.com) discussed the widely held but little-supported belief that too much fat is nutritionally bad. Peter Duesberg contends that the HIV hypothesis for AIDS is such an error (please note, I am not agreeing with him).

Sometimes cultural assumptions can lead to such errors. Stephen Jay Gould described countless such mistakes, spread by culture or simple lack of data, in The Mismeasure of Man. Gould points out errors such as reifying abstract concepts into entities that exist apart from our abstraction (as has been done with IQ), and forcing measurements into artificial scales, both assumptions that spread readily inside and outside the scientific community without any backing.

Mind, informational cascades do not have to be errors—one could argue that the state of being “cool” comes from an informational cascade. Possibly many accurate understandings come via informational cascades as well, but it’s harder to demonstrate those because of the nature of the creatures.

It works like this: people tend to think in binary, all-or-nothing terms. Shades of gray do not occur. In fact, it seems the closest we come to a non-binary understanding of a concept is to have many differing binary decisions about related concepts, which balance each other out.

So, in the face of no or incomplete information, we take our cues from the next human. When Alice makes a decision, she decides yes-or-no; then Bob, who knows nothing of the subject, takes his cue from Alice in a similarly binary fashion, and Carol takes her cue from Bob, and so it spreads, in a cascade effect.
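
A toy simulation (my own illustration, with made-up probabilities) shows how quickly this adds up: each person has only a weak private guess, and most of the time simply copies the previous person’s all-or-nothing choice, so an early decision propagates down the line.

```python
import random

def cascade(n_people: int = 20, copy_prob: float = 0.9, seed: int = 42) -> list[bool]:
    """Simulate a binary informational cascade of yes/no decisions."""
    random.seed(seed)
    decisions: list[bool] = []
    for i in range(n_people):
        private_signal = random.random() < 0.5    # weak, uninformed private guess
        if i == 0 or random.random() > copy_prob:
            decisions.append(private_signal)      # the occasional independent thinker
        else:
            decisions.append(decisions[-1])       # copy the previous person's choice
    return decisions

# Typically prints long runs of identical choices, regardless of any underlying truth.
print(cascade())
```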

Economists and others rely on this binary herd behavior in their calculations.

But.

The problem is that people don’t always think this way; therefore people don’t have to think this way. Some people seem to develop the habit of critical thought at an early age. As well, the very concept of binary thinking seems to fit too neatly into our need to measure. It’s much easier to measure all-or-nothing than shades of gray, so a model that assumes we behave in an all-or-nothing manner can easily be measured, and is therefore more easily accepted within the community of discourse.

Things tend to be more complex than we like to acknowledge. As Stephen Wolfram observed in A New Kind of Science,

One might have thought that with all their successes over the past few centuries the existing sciences would long ago have managed to address the issue of complexity. But in fact they have not. And indeed for the most part they have specifically defined their scope in order to avoid direct contact with it.

Which makes me wonder if binary classification isn’t its own informational cascade. In nearly every situation, there are more than two factors and more than two options.

The tradition of imposing a binary taxonomy on our world goes back a long way. Itkonen (2005) speaks about the binary classifications that permeate all mythological reasoning. By presenting different quantities as two aspects of the same concept, they are made more accessible to the listener. By placing them within the same concept, the storyteller shows their similarities and uses analogical reasoning to reach the audience.

Philosophy speaks of the law of the excluded middle—something is either this or that, with nothing in between—but this is a trick of language. A question that asks for only a yes or no answer does not allow for responses such as “both” or “maybe.”

Neurology tells us that neurons either fire or they don’t. But neurons are much more complex than that. From O’Reilly and Munakata’s Computational Explorations in Cognitive Neuroscience (italics from the authors, boldface mine):

In contrast with the discrete boolean logic and binary memory representations of standard computers, the brain is more graded and analog in nature… Neurons integrate information from a large number of different input sources, producing essentially a continuous, real valued number that represents something like the relative strength of these inputs…The neuron then communicates another graded signal (its rate of firing, or activation) to other neurons as a function of this relative strength value. These graded signals can convey something like the extent or degree to which something is true….

Gradedness is critical for all kinds of perceptual and motor phenomena, which deal with continuous underlying values….

Another important aspect of gradedness has to do with the fact that each neuron in the brain receives inputs from many thousands of other neurons. Thus, each individual neuron is not critical to the functioning of any other—instead, neurons contribute as part of a graded overall signal that reflects the number of other neurons contributing (as well as the strength of their individual contribution). This fact gives rise to the phenomenon of graceful degradation, where function degrades “gracefully” with increasing amounts of damage to neural tissue.
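
As a crude illustration of that gradedness (my own sketch, not code from the book): a toy neuron combines its inputs into a continuous strength value and outputs a degree of activation rather than a 0-or-1 verdict. The weights and inputs are arbitrary.

```python
import math

def graded_activation(inputs: list[float], weights: list[float]) -> float:
    strength = sum(x * w for x, w in zip(inputs, weights))  # continuous, real-valued strength
    return 1.0 / (1.0 + math.exp(-strength))                # graded output, not all-or-nothing

print(graded_activation([0.2, 0.7, 0.1], [1.5, 0.8, -0.3]))  # ~0.7: a shade of gray, not a yes/no
```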

So, now that we have a clue that binary thinking may be an informational cascade all its own, what do we do about it?


References

Itkonen, E. (2005). Analogy as structure and process: Approaches in linguistics, cognitive psychology and philosophy of science. Amsterdam: John Benjamins Publishing.

O’Reilly, R.C., and Y. Munakata. (2000). Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. Cambridge, MA: MIT Press.


Originally posted on alexfiles.com (1998–2018) on May 5, 2008.

Excluding data limits thought

design thinking

From illustrations in Stephen Jay Gould’s “Wonderful Life”; these creatures were misidentified for decades because of thought-limiting taxonomies. Stippled ink, watercolor.

I have never understood the desire to delete articles in Wikipedia solely on the basis of the highly subjective concept of “notability,” and I’ve fought against deletion of such articles. It’s easy to store the information, and it’s useful to someone or it wouldn’t be there. To these reasons I would add another: the more information you have, the more freedom you have to think flexibly about a subject.

Nicholson Baker supports the concept of a Deletopedia, a wikimorgue where all the “nonnotable” articles removed by the frustrated book-burners on Wikipedia would reside. Baker describes it:

…a bin of broken dreams where all rejects could still be read, as long as they weren’t libelous or otherwise illegal. Like other middens, it would have much to tell us over time.

Why, exactly, is this useful? Because we need taxonomic freedom.

A taxonomy is only as free as its data. The more categories you have—the more data—the more ways a given piece can move from one category to another and be connected, and the more flexibly and creatively you can arrange and understand the data. Not only does the freedom to connect and associate a given piece of data help, but each additional piece of data increases the number of patterns possible.

How we understand information is driven by the taxonomies—the patterns—we place it in. As Marvin Minsky said, “You don’t understand anything until you learn it more than one way.” The biologists have known this for some time. Initially, biological species classification was based primarily on anatomy and phenotype. But there are many ways to think about organisms: according to evolutionary ancestry (cladistics), according to geography, according to the niche they occupy ecologically, to name a few. What taxonomy you choose to use determines how you’re able to perceive and understand a given organism or system.

The moment you begin to exclude and include along any lines, you begin to enforce a taxonomy of sorts. The taxonomies we use determine and limit the direction and options of our thought. We need to apply them to look at things from a given perspective, but we need to be aware of them so we can change them and see different perspectives. So, thinking in terms of deleting what is not notable is implicitly applying a self-limiting taxonomy. You will not be able to change your perspective to one that makes use of the deleted information, because you will not have the information.

This tendency by some to ignore or remove information that does not fit into their personal taxonomy of relevance is present in library cataloging, too. As a former online cataloger myself, I’m also in support of keeping analog card catalogs as well as digital ones. Having project-managed teams that converted card catalogs into databases, I’ve seen first-hand how subjective the choice of which pieces of information on a card get migrated into the database can be. I think every piece of data should be online, but there are plenty of catalogers who skip over descriptive items they find trivial.

Humans are linguistic souls (even the mostly spatial types like myself), and having a new word or symbol attached to a concept immediately adds a tool to our arsenal of thought. This is why one of the first things repressive regimes do is burn the books and suppress the intellectuals. “All the Nazi or Fascist schoolbooks made use of an impoverished vocabulary, and an elementary syntax, in order to limit the instruments for complex and critical reasoning” (Umberto Eco, 22 June 1995, New York Review of Books). We do ourselves a disservice when we close off possible avenues of thought by disregarding data currently not important to us.

Besides, as Flaubert observed, “Anything becomes interesting if you look at it long enough.”

Maybe Wikipedia should make that its motto.


Originally posted on UXtraordinary.com, March 20, 2008.

Messy is fun: challenging Occam’s razor

design thinking, psychology, taxonomy

The scientific method is the most popular form of scientific inquiry, because it provides measurable testing of a given hypothesis. This means that once an experiment is performed, whether the results were negative or positive, the foundation on which you are building your understanding is a little more solid, and your perspective a little broader. The only failed experiment is a poorly designed one.

So, how to design a good experiment? The nuts and bolts of a given test will vary according to the need at hand, but before you even go about determining what variable to study, take a step back and look at the context. The context in which you are placing your experiment will determine what you’re looking for and what variables you choose. The more limited the system you’re operating in, the easier your test choices will be, but the more likely you are to miss something useful. Think big. Think complicated. Then narrow things down.

But, some say, simple is good! What about Occam’s razor and the law of parsimony (entities should not be unnecessarily multiplied)?

Occam’s razor is a much-loved approach that helps make judgment calls when no other options are available. It’s an excellent rule of thumb for interpreting uncertain results. Applying Occam’s razor, you can act “as if” and move on to the next question, and go back if it doesn’t work out.

Still, too many people tend to use it to set up the context of the question, unconsciously limiting the kind of question they can ask and limiting the data they can study. It’s okay to do this consciously, by focusing on a simple portion of a larger whole, but not in a knee-jerk fashion because “simple is better.” Precisely because of this, several scientists and mathematicians have suggested anti-razors. These do not necessarily undermine Occam’s razor. Instead, they phrase things in a manner that helps keep you focused on the big picture.

Some responses to Occam’s concept include these:

Einstein: Everything should be as simple as possible, but no simpler.

Leibniz: The variety of beings should not rashly be diminished.

Menger: Entities must not be reduced to the point of inadequacy.

My point is not that Occam’s razor is not a good choice in making many decisions, but that one must be aware that there are alternative views. Like choosing the correct taxonomy in systematics, choosing different, equally valid analytic approaches to understand any given question can radically change the dialogue. In fact, one can think of anti-razors as alternative taxonomies for thought: ones that let you freely think about the messy things, the variables you can’t measure, the different perspectives that change the very language of your studies. You’ll understand your question better, because you’ll think about it more than one way. And while you’ll need to pick simple situations to test your ideas, the variety and kind of situations you can look at will be greatly expanded.

Plus, messy is fun.

Originally posted on former personal blog UXtraordinary.com.

Zombie ideas

psychology

In 1974 Robert Kirk wrote about the “zombie idea,” describing the concept that the universe, the circle of life, humanity, and our moment-to-moment existence could all have developed identically, with “particle-for-particle counterparts,” and yet lack feeling and consciousness. The idea is that, evolutionally speaking, it is not essential that creatures evolve consciousness or raw feels in order to evolve rules promoting survival and adaptation. Such a world would be a zombie world, acting and reasoning but just not getting it (whatever “it” is).

I am not writing about Kirk’s idea. (At least, not yet.)

Rather, I’m describing the term in the way it was used in 1998, by four University of Texas Health Science Center doctors, in a paper titled “Lies, Damned Lies, and Health Care Zombies: Discredited Ideas That Will Not Die” (PDF). Here the relevant aspect of the term “zombie” is refusal to die, despite being killed in a reasonable manner. Zombie ideas are discredited concepts that nonetheless continue to be propagated in the culture.

While they (and just today, Paul Krugman) use the term, they don’t explicate it in great detail. I thought it might be fun to explore the extent to which a persistent false concept is similar to a zombie.

  • A zombie idea is dead.
    For the vast majority of the world, the “world is flat” is a dead idea. For a few, though, the “world is flat” virus has caught hold, and this idea persists even in technologically advanced cultures.
  • A zombie idea is contagious.
    Some economists are fond of the concept of “binary herd behavior.” The idea is that when most people don’t know about a subject, they tend to accept the view of the person who tells them about it; and they tend to do that in an all-or-nothing manner. Then they pass that ignorant acceptance on to the next person, who accepts it just as strongly. (More about the tyranny of the dichotomy later.) So, when we’re children and our parents belong to Political Party X, we may be for Political Party X all the way, even though we may barely know what a political party actually is.
  • A zombie idea is hard to kill.
    Some zombie viruses are very persistent. For example, most people still believe that height and weight are a good basis for calculating your appropriate calorie intake. Studies, however, repeatedly show that, height and weight being equal, other factors can change the body’s response. Poor gut flora, certain bacteria, and even having been slightly overweight in the past can mean that of two people of the same height and weight, one will eat the daily recommended calories and keep their weight steady, and one will need to consume 15% less in order to maintain the status quo. Yet doctors and nutritionists continue to counsel people to use the national guidelines to determine how much to eat.
  • A zombie idea eats your brain.
    Zombie ideas, being contagious and false, are probably spreading through binary thinking. A part of the brain takes in the data, marks it as correct, and because it works in that all-or-nothing manner, contradictory or different data has a harder time getting the brain’s attention. It eats up a part of the brain’s memory, and by requiring more processing power to correct it, eats up your mental processing time as well. It also steals all the useful information you missed because your brain just routed the data right past your awareness, thinking it knew the answer.
  • Zombies are sometimes controlled by a sorcerer, or voodoo bokor.
    Being prey to zombie ideas leaves you vulnerable. If you have the wrong information, you are more easily manipulated by the more knowledgeable. Knowledge, says Mr. Bacon, is power.
  • Zombies have no higher purpose than to make other zombies.
    Closely related to the previous point. Even if you are not being manipulated, your decision-making suffers greatly when you are wrongly informed. You are also passing on your wrong information to everyone you talk to about it. Not being able to fulfill your own purposes, you are simply spreading poor data.

So we see that the tendency to irony is not just useful in and of itself, but useful in helping prevent zombie brain infections. As lunchtime is nearly over, and I can’t think of more similarities, I’m stopping here to get something to eat.

[Exit Alex stage right, slouching, mumbling, “Must…eat…brains.”]

Originally posted on former personal blog UXtraordinary.com.