Comments (6)
Nina Khosla said
at 3:35 am on May 28, 2010
This paper was cool because it speaks to a newer trend: designing tools for developers and others performing highly technical tasks. It is hard, if not impossible, to design well for these groups without understanding what they are actually doing. One of my favorite examples of this is "code bubbles": http://www.cs.brown.edu/people/acb/codebubbles_site.htm.
One thing that caught my eye about the walkthrough of Jenny's project is how much it reminds me of my own coding experience and the kinds of things I currently try to get done ad hoc (through my web browser, textbooks, previous code, etc.). The model in my head for how to use this came from Xcode and the way it helpfully auto-fills for you.
I also appreciated the level of detail and attention paid to various aspects - for example, knowing that some code posted to message boards would be buggy and that the tool would have to be designed with that in mind.
As a side note, I had a hard time believing the statistics from the testing, simply because the sample size was so small and the results seemed to depend heavily on how well the developers already knew Flex (14, or roughly 3/4, had used Flex for over a year, and only 12 used Flex for 25+ hours a week).
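If the per-participant numbers were available, one rough way to check that worry would be to correlate prior Flex experience with task outcome directly. Here is a minimal sketch; every number below is invented for illustration, since the paper does not publish per-participant data:

```python
# Rough check: does prior Flex experience predict task performance?
# All numbers below are made up; the paper does not release per-participant data.
from scipy.stats import spearmanr

flex_hours_per_week = [30, 5, 40, 25, 2, 35, 10, 28, 45, 8]    # hypothetical
task_time_minutes   = [12, 25, 10, 14, 30, 11, 22, 13, 9, 27]  # hypothetical

rho, p_value = spearmanr(flex_hours_per_week, task_time_minutes)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A strong negative rho would suggest the results hinge on prior Flex
# experience, which with a sample around 20 is exactly the kind of
# confound that is hard to rule out.
```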
Nina Khosla said
at 3:35 am on May 28, 2010
I was also very intrigued by this:
"At the completion of the study, we conducted 30-minute interviews with four active Blueprint users to understand how they integrated Blueprint in their workflows. Based on the interviews, we formed three hypotheses, which we tested with the Blueprint usage logs. After evaluating these hypotheses, we performed further exploratory analysis of the logs. This additional analysis provided high-level insight about current use that we believe will help guide future work in creating task-specific search interfaces."
I have a theory that many designers develop "hypotheses" over the course of ethnographic interviews, whether consciously or not, and then use them to direct future inquiries and interviews, using those later interviews to validate (or invalidate) the hypotheses. Brandt's approach adds a step: developing hypotheses through interviews and then testing them against data. I am also curious how the validation process works and whether it affects objectivity. Even with data, parts of this are subjective, so how can we, as designers, better manage this process to get the best possible results?
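To make the "hypotheses from interviews, validation from logs" loop concrete, here is a minimal sketch of what one such check might look like. The hypothesis, the log fields, and every count are invented for illustration; the paper only says that three hypotheses were tested against the usage logs.

```python
# Hypothetical interview-derived hypothesis: users who search from within the
# IDE paste example code more often than users who search from a browser.
# The 2x2 counts below are invented; real counts would come from the usage logs.
from scipy.stats import chi2_contingency

#             pasted example   did not paste
log_counts = [[420,            1080],   # in-IDE (Blueprint) searches
              [310,            1690]]   # browser searches

chi2, p_value, dof, expected = chi2_contingency(log_counts)
print(f"chi2 = {chi2:.1f}, p = {p_value:.4f}")
# A small p-value would be consistent with the interview-derived hypothesis,
# but it only shows that the proportions differ, not why - the subjective
# part still has to be argued from the qualitative data.
```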
Andrew Hershberger said
at 12:33 pm on May 28, 2010
Numbers!
In contrast to the other papers we read this quarter, Brandt's work relies heavily on data. How does that data get turned into useful design feedback? Did the numbers really drive design changes, or were the user interviews more effective at directing iteration? Despite the data, my impression is that feedback from individual users had a greater impact on the design than the aggregate data generated by thousands of users. This observation contains both a warning and an opportunity for designers:
1) Warning: Stories are more salient than statistics, so use them carefully. A good story from a user early on in the design process could dramatically change the direction of a product, but who knows whether that early feedback will generalize to the rest of the target audience?
2) Opportunity: If you're trying to explain your project to someone else, use stories before statistics. Brandt did a great job of this in his paper. His scenario about the character Jenny helped me understand the power of Blueprint far more effectively than the statistical results did.
------------------
As a design tool, Blueprint gives designers an increased ability to manipulate the design medium (source code). As shown by the data, this increased manipulability led to better outcomes in code quality. Interestingly, the improvement in product quality was not statistically significant. Would users in the Blueprint condition (if given the opportunity) explore more alternatives than those in the traditional browser condition?
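If someone ran that follow-up study, the comparison could be as simple as counting alternatives per participant and comparing the two conditions. A rough sketch with invented counts (neither the study design nor the numbers come from the paper):

```python
# Hypothetical follow-up: do Blueprint users explore more alternatives?
# Counts of distinct alternatives explored per participant are invented.
from scipy.stats import mannwhitneyu

blueprint_alternatives = [4, 6, 3, 5, 7, 4, 5, 6, 3, 5]   # hypothetical
browser_alternatives   = [2, 3, 4, 2, 3, 5, 2, 3, 4, 2]   # hypothetical

# One-sided test: the Blueprint condition explores more alternatives.
stat, p_value = mannwhitneyu(blueprint_alternatives, browser_alternatives,
                             alternative="greater")
print(f"U = {stat}, p = {p_value:.4f}")
# With samples this small (the same concern Nina raised about the original
# study), a non-significant result would not say much either way.
```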
Also, as possible future work, I would suggest facilitating the contribution of examples that are formatted in a way that is easily parsed by Blueprint.
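For instance, contributed examples could carry a small amount of machine-readable metadata alongside the snippet, so Blueprint could index and render them without scraping. This is purely a sketch of what such a format might look like; the field names are my own invention, not anything Blueprint actually defines:

```python
# Hypothetical contribution format for a community-submitted example.
# None of these field names come from Blueprint; they only illustrate the
# kind of structure that would make examples easy to parse and index.
import json

example = {
    "title": "Draw a dashed line",
    "description": "Minimal ActionScript snippet that draws a dashed line.",
    "framework": "Flex",
    "framework_version": "3.4",
    "imports": ["mx.core.UIComponent"],
    "code": "var line:UIComponent = new UIComponent();\n// ...drawing code...",
    "runnable": True,          # contributor claims the snippet compiles as-is
    "tags": ["graphics", "line", "dashed"],
}

print(json.dumps(example, indent=2))
```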
Nina Khosla said
at 9:32 pm on May 28, 2010
@Andrew it's really interesting that you say that. I think one of the things designers struggle with is trying to balance these two things without undermining the significance of either. The way I've used them is: (a) use statistics to better drive interviews - everyone should come into interviews with a plan (and then promptly forget it, but that's a different story), and statistics can help us make a better plan; and (b) use interview insights and validate them through statistics.
I think one of the mistakes in the paper was that the statistics and the insights didn't match up perfectly. Was there a better way to measure whether users had experiences like Jenny's?
Andrew Hershberger said
at 11:53 am on Jun 1, 2010
@Nina, I agree – stats are great in lots of cases, especially for validating things that are quantifiable, like behavior. Thinking about it again, stats don't easily account for emotion or attitude. Some people make the simplifying assumption that behavior can serve as a proxy for emotion and attitude, but I think that's pretty risky. So, yes, as designers, we need to take a balanced approach to our work.
Poornimaw said
at 8:04 pm on Jun 9, 2010
@Andrew - I share your observation that this paper relied very heavily on data to drive design change. But I was wondering: in the previous examples, we repeatedly saw that design was more of a "fuzzy," undefined process, full of uncertainties, that relied on trying, failing, and then trying another route - a bit like walking through a maze. Those choices also require a lot of intuition, emotional intelligence, and other factors that cannot be quantified. So how applicable or effective is this data-driven approach? Are there special contexts in which it works better than the intuitive approach? What parameters do we have to look at when choosing the appropriate path?