

Rob Ryan

Page history last edited by Rob Ryan 13 years, 12 months ago







About me: Rob is a junior in Symbolic Systems, studying Human-Computer Interaction. But then again, Rob has also declared: Human Biology, Mechanical Engineering, English, and Art History in the past, so there's really no telling how long HCI will last. This summer, Rob will be on campus, doing front-end web design work for his friends' new startup: Unmelt (currently in alpha).


Reading Responses


Date  Speaker  Response 
4/9  Larry Leifer 

"The urge for good design," Harry Bertoia said, "is the same as the urge to go on living. The assumption is that somewhere, hidden, is a better way of doing things." It is no surprise, then, that those of us who focus on design are drawn to redesign the discipline; that we assume that there exists a better science of design education.


And so I probably wasn't the only one who got excited, once I passed the abstract: papers that purport to re-conceive of design are sexy, largely because of the multiplier effect they could have on your work. What was I looking for, personally? Some acknowledgement that engineering education at Stanford is imperfect, or even broken: some reason I might explore for the disconnect at our University between "techies" and "fuzzies." The disconnect manifests as a culture of active doubt of the humanities. There exists a paramagnetism towards the natural sciences, and an aversion to humanistic thought.


This disrespect -- the right-brain beating up on the left-brain -- is a daily struggle for designers. It is somewhat ironic that Leifer is espousing a project-learning approach; he's a Stanford veteran, and has certainly borne witness to our collective cultural shift from 20th century philosophizing to 21st century engineering. He frames his solution in terms of ambiguity: he wants to enrich a design team's result by encouraging convergent+divergent thinking cycles. The divergent half of this cycle (the "how many ways could this happen?") is a canonical left-brain question. It's the hallmark of a humanistic tradition that asks how to re-interpret literature, how to conceive of populations, how to enumerate emotion. The convergent tradition, the engineering tradition, asks: "which of these is the best? which is cheapest? which works fastest?"


Certainly, these are both valuable techniques for a design team, and it may be obvious to others that their raucous, and often unpleasant, opposition is what fuels good work. But in Leifer's work, I found little solace for those of us in design who mourn the estrangement of what feel like two equal partners in the cognitive load of design. Does Leifer's pursuit of ambiguous collaboration fix the problem? Not for me: but it is a decent start.

4/16  Harold G. Nelson 

“People think that design is styling. Design is not style. It’s not about giving shape to the shell and not giving a damn about the guts. Good design is a renaissance attitude that combines technology, cognitive science, human need, and beauty to produce something that the world didn’t know it was missing.”


— Paola Antonelli




I don’t know if I could have had a more pleasurable reading experience: Nelson’s opening shot across the bow of science and art is a powerful stimulus for any designer who has felt the tide come in for design. This new conception of a design culture -- a community of experts willing to expose themselves as a third dimension of academia -- is almost heretical.


It may seem obvious, once it is said to a designer: that the synthetic realm that “you” work in is a different process than your education. Good design is obvious: good design education is not, yet. It has been long murmured to design students that they will be expected to respond to challenges in their professional life with the poise of an artist and the skill of an engineer. Is that really working?


The great design moments of our time are consistently those with no obvious genesis in either tradition: science or art. Consider the iPod, which acknowledged (and indeed, drove) the formation of a young, musically-attuned, mobile culture. What about science in the late 20th century demanded the iPod? What about post-modernism demanded the iPod? Can we say that it leveraged the engineering acumen of a generation of computing research? Yes, surely. And could we also say that Jonathan Ive (and the rest of his team) are fluent in Apple’s reconception of post-modernist color and line? Yes, certainly.


The iPod, and many other “well-designed” products, meet that unperceived need. Good designs can be miniature persons, have a character, and speak to you in a fulfilling way. A good designer, then, is a strategist shuttling between these two worlds to weave a pleasant illusion. But that, as Nelson points out, isn’t sustainable -- if we want more iPods in our life, we have to acknowledge this guild of thinkers: the people who quietly catalogue the patterns of our species’ life, and step in to fold their ideas indelibly into our lives.


4/23 Ed Catmull 


"You've achieved success in your field when you don't know whether what you're doing is work or play."

— Warren Beatty


It's beginning to look like humans are just not quite fit to work. Everything I've been reading, hearing, learning, and doing, is teaching me that I have been wound too tightly, or perhaps incorrectly, for the work I enjoy. Creativity, which is the lovechild of the athletic brain and the light heart, seems to thrive only when it is advocated for against a host of anathemas. Managers stamp it out by being overly conservative. Groups stamp it out by slinging personal attacks. Companies stamp it out by privileging one act of creation over another.


Catmull gives us a rubric for fighting our somewhat-natural tendencies against creativity. His focus on safe team-play, showing unfinished work, daily briefings, personal ownership of work, open communication, etc., seem sort of obvious to many of us. They're the hallmarks of the constellation of experiences we enjoy the most in our lives. Work is best when we are free to have fun with it.


I want to ask a different question, though -- why are we this way? If we can be effective designers in kindergarten, what is it that drives us to lose the democracy of playtime, and gives us the pallor of the corporate zombie? And if corporate life runs such risks, why has the Western world "succeeded"? Can we say that creativity is still fighting conservatism (in its many forms) to create its own ecosystem, and has found a niche large enough to ensure some measure of success? Catmull has a prescription, but I'd be interested in hearing the diagnosis.


4/30 Mitch Resnick 


@Andrew -- I have a good friend who is studying for Teach for America right now at the Bing School. We also happen to work together as supervisors at the Calling Center on campus, and she said something at work the other day that really responds to your third point. I was coloring a decoration for the Center's wall and she said, "I see that you're working very hard on that decoration."


I was struck by the stilted phrasing of her comment, and asked her why she said that -- "don't you like it?" I asked. She laughed, and apologized. "At the Bing School," she said, "we're not allowed to issue judgements on the creative work the kids do. We only say 'I see that you're doing a lot of work on that,' and that prompts them to comment and reflect on their own work."


I'm curious about what she would say about Scratch. She's about to begin tenure at an elementary school -- maybe she can bring it along!


5/7 David Kirsh 


David Kirsh’s work has certainly come upon me at an opportune point in the quarter: our other HCI seminars have just finished reviewing the Stanford HCI group’s results on prototyping. The takeaway from the Stanford work? That designers of all stripes benefit from a prototype of their design that can be hacked, modded, shared, learned from, etc. Kirsh astutely points out that the prototype can serve more subtle functions still, essentially summarizing the cognitive benefits of offloading process into the physical world.


Kirsh also raises an interesting assertion in the course of his opening: that representations will always allow us to build arbitrarily complex structures. At first blush, that sounds alright -- but do we really want to swallow the notion that “there are cognitive things we can do outside our heads that we simply cannot do inside”?


In his last section, Kirsh acknowledges that this idea has some shaky foundations. Certainly we could allow that models give us physically persistent, shareable, rearrangeable instantiations of our ideas, but are they strictly better than keeping it all in our heads? Kirsh cites the cases of Tesla, Hawking, and other savants -- humans with a preternatural ability to model systems mentally. Apart from these instances, Kirsh would argue, most of us are handicapped. Without an eidetic memory, or godly spatial reasoning, we could never internalize something as simple as an orrery.


I’m reminded here very powerfully of the canonical story of computing and design thinking in the Valley: the design of the Apple II. In an interview with Jessica Livingston in the anthology Founders At Work, we see what happens to a human to elevate them to this supra-representational state of perfect flow:


Livingston: What is the key to excellence for an engineer?


Wozniak: You have to be very diligent. You have to check every little detail. You have to be so careful that you haven’t left something out. You have to think harder and deeper than you normally would. It’s hard with today’s large, huge programs.


I was partly hardware and partly software, but, I’ll tell you, I wrote an awful lot of software by hand (I still have the copies that are handwritten) and all of that went into the Apple II. Every byte that went into the Apple II, it had so many different mathematical routines, graphics routines, computer languages, emulators of other machines, ways to slip your code in and out of an emulation mode. It had all these kinds of things and not one bug ever found. Not one bug in the hardware, not one bug in the software. And you just can’t find a product like that nowadays. But, you see, I had it so intense in my head, and the reason for that was largely because it was part of me. Everything in there had to be so important to me. This computer was me. And everything had to be as perfect as could be made. And I had a lot going against me because I didn’t have a computer to compile my code, my software.

In other words, he had no representational crutch. Did Wozniak suffer from his design process? Almost certainly. Would we have rather had the Apple II any other way? Not by a long shot.


5/14 S. Joy Mountford 

It's by pure chance that Jeffrey Heer came to my CS376 class this week to speak to us about visualization technology. (Either that, or design education at this school is a conspiracy). I was both surprised and gladdened that Dr. Heer arrived; until then, visualization at Stanford wasn't very high-profile for me.


So it comes as another pleasant shock to be reviewing Dr. Heer's work this week. Protovis provides a declarative markup for articulating visualizations for the web. As someone on a start-up dev-team, that's a highly desirable good -- the alternative frameworks for a comparable solution range from the design-impoverished (Google Visualization API) to the beautifully inexact (Adobe Illustrator). Web designers have a palpable hunger for the next generation of tools, ones that will let them manipulate pixels with all the fluidity of CSS.


Granted, this need is only ten-or-so years old: research used to be quite satisfied with R graphics, and the business world seems fattened on Excel bar charts (and that nouveau-Arial typeface, Calibri). But with the data deluge on its way, there will be a growing ecology of some data visualization language on the web. When news organizations, bloggers, and web developers all turn to tame this data, they will eventually wrestle out a standard for this mode of expression, as they did for markup (HTML) and video (Flash).


And this brings me to the part I liked the most about the reading. When users are all empowered to share their visualizations & data, everybody wins. (You can even save a government $3.2 billion.) If there is a pervasive, open-sourced standard that allows us to harness the Twitters, the SMS gateways, the Youtube hits -- we might have something flexible enough to show us where we're headed. Or, failing that, we'll all at least be looking at the same sort of map. Standardized, simple toolkits FTW.


5/21 Jodi Forlizzi 

This week's paper, concerning J. Forlizzi's Snackbot project, is simply puzzling. I'm not sure how to interpret the context in which Forlizzi et al. would be producing a project summary of a snack-delivering robot. At first, I grew excited for my friends in robotics -- someone had solved indoor navigation! -- but it turned out to be only "semi-autonomous." Huh.


Clearly, this 'paper' is operating in a different modality: it seems more like a milestone document for a grant project. If so, then I think what this paper taught me most is about the differing traditions of research between these two universities. The comparatively ontology-heavy (relative to Stanford) language and enumeration (e.g., the "three types of cultural model") have a distinctly foreign feel. There seems to be something serious brewing between design and HCI researchers: this is the second CMU professor to assert a distinctly third-party role for design. This document contains an intensely ethnographic and humanistic observation. The models are free from many of the constraints imposed on the typical Stanford study -- what does this say about differing traditions of research?


Now that I've realized this document is an exemplar of another sort of endeavor, I hesitate to render a judgement on this effort. In trying to imagine the trajectory of this project, I'm really only comfortable critiquing the end result of such an effort. I'm fascinated to see this talk, then -- it should be a bit uncomfortable.



Final Paper



(A .pdf copy of my final report is available for download here)





Comments (3)

Nina Khosla said

at 4:46 am on Apr 16, 2010

Your comment about Leifer's work is interesting, definitely, but one of the things that's left out is the active "doing" part of design. It's true that engineering design at Stanford is focused tremendously on this "doing," without a lot of "reflection," but much frustration comes from designers without the ability to actually create the changes they want to make. It's that doing that makes design different from pure analytical thought or creativity. I believe that's why design has so often attached itself to the engineering disciplines.

Andrew Hershberger said

at 12:58 pm on Apr 17, 2010

Rob-- I like your page layout. I'm updating my own accordingly!

Steven Dow said

at 2:43 pm on Apr 26, 2010

Rob -- make sure to cross-list your primary posts under the Week XX pages so they get full opportunity for group discussion.

You raise a really interesting question that I think we should debate next week with Mitch Resnick: what happens to our creativity after kindergarten? My hunch is that "conservatism" creeps in much earlier than corporate life. Perhaps corporate life perpetuates the reserved behavior that gets hammered into us throughout a formal education. The western world is not all conservative—the creative spirit certainly lives on in entrepreneurial circles.
