Welcome to the IdeaMatt blog!

My rebooted blog on tech, creative ideas, digital citizenship, and life as an experiment.

Entries in quantified self (19)

Friday
Jan 28, 2011

How to experiment: Guidelines from Stewart Friedman's "Be a Better Leader, Have a Richer Life"

[cross-posted to Quantified Self]

  1. Curiosity: An emotion related to natural inquisitive behavior such as exploration, investigation, and learning.
  2. Exploration: To travel for the purpose of discovery.
  3. Discovery: A productive insight.

I've been thinking of this triumvirate as essential characteristics of scientific inquiry - get curious about something, try out some different things to dig into it, see what you learn, and repeat. My personal interest in this, in addition to the tools and sites we share here on QS, is to figure out how specifically we navigate the process of curiosity, exploration, and discovery.

Taking a cue from Alex's summary of How To Measure Anything, Even Intangibles, I want to share an impressive work called "Be a Better Leader, Have a Richer Life" by Stewart Friedman (see below [1] for where to find it). I'll focus on the experimental aspects of his work and pull out some highlights related to process.

Friedman describes a four-step process that works across his four general domains of life - work, home, community, and self:

  1. Reflect
  2. Brainstorm possibilities
  3. Choose experiments
  4. Measure progress

The first step, reflect, is where you think about your priorities in each of those four domains and compare them to how you actually allocate your time and energy. This will identify conflicts that should guide your choices of where to start experimenting. (I think of this kind of goal-driven approach as "top down" experimenting, as distinguished from "bottom up" experimenting, where you start from an observation that catches your attention - such as when I noticed I was moodier after drinking alcohol - and self-experiment from there.)

In step two you brainstorm possible experiments that will close the gaps identified in step one and bring you more satisfaction in life. Friedman stresses the importance of putting together a long list of small experiments, noting that keeping them small minimizes risk and gets results quickly. I especially like his guideline that the most useful experiments feel like a bit of a stretch: not too easy and not too intimidating.

The third step is to choose which of the candidate experiments to perform, i.e., which are most promising and will most improve your fulfillment and performance across the four domains. I liked his suggestion that you favor experiments that would carry a high cost in regret and missed opportunity if you didn't do them. He goes on to say that it's not practical to try more than three experiments at once. Not only do experiments take effort, but in Friedman's experience two turn out to be relatively successful and one "goes haywire."

The final step is to measure progress. He has you develop a scorecard for each chosen experiment where you specify its life domain, your goals for it, and how you'll measure success. Metrics may be objective or subjective, qualitative or quantitative, reported by you or by others, and frequently or intermittently observed. He gives sample metrics like cost savings from reduced travel, number of e-mail misunderstandings averted, degree of satisfaction with family time, and hours spent volunteering at a teen center. Friedman stresses the common wisdom that, as for a scientist, the only way to fail with an experiment is to fail to learn from it, and metrics help ensure that doesn't happen. They give you hard data to analyze, and they can teach you how to design better experiments in the future.

Here's a sample experiment, courtesy of the BNET article below:

 


Exercise three mornings a week with spouse.
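A scorecard like the sample above might be modeled, very roughly, like this (the class and field names here are my own invention, not Friedman's):

```python
from dataclasses import dataclass, field

# A minimal sketch of a per-experiment scorecard in the spirit of
# Friedman's step four. Field names are my guesses, not his terminology.
@dataclass
class Scorecard:
    experiment: str   # what you're trying
    domain: str       # work, home, community, or self
    goal: str         # what success would look like
    metric: str       # how you'll measure it
    readings: list = field(default_factory=list)  # observations over time

    def record(self, value):
        """Log one observation of the metric."""
        self.readings.append(value)

card = Scorecard(
    experiment="Exercise three mornings a week with spouse",
    domain="self",
    goal="More energy and more shared time",
    metric="Mornings exercised per week",
)
card.record(2)
card.record(3)
print(card.readings)  # weekly counts logged so far
```

The point of the sketch is simply that each experiment carries its own goals and metrics, so the "hard data to analyze" accumulates per experiment rather than in one undifferentiated log.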

Friedman gives plenty of advice beyond the four steps. In particular I like his description of the overall experimental approach:

"...systematically designing and implementing carefully crafted experiments - doing something new for a short period to see how it affects all four domains. If an experiment doesn't work out, you stop or adjust, and little is lost. If it does work out, it's a small win; over time these add up so that your overall efforts are focused increasingly on what and who matter most. Either way, you learn more about how to lead in all parts of your life."

Finally, I love how he describes the value of the experimental mindset. One example is how framing a change as a trial can open doors that would otherwise be closed. Saying "Let's just try this. If it doesn't work, we'll go back to the old way or try something different" lowers resistance because the change seems less threatening. This is valuable because it's our nature to fear change. In fact my wife and I regularly use this with each other, such as during a kitchen remodel, when she got me to accept trying a vintage sink that initially, well, made me a little queasy. She pointed out that "it's just a little experiment" and that it was relatively reversible (standard plumbing placement meant a different sink could easily be installed later). The result: it worked out fine. In my case I suggested we experiment with a couch in the kitchen, an idea she despised but came to love. Give it a try!

Overall, I highly recommend Friedman's work. His book is my next read.

Resources

Sunday
Jan 23, 2011

Designing good experiments: Some mistakes and lessons

[cross-posted from Quantified Self]

Litmus paper, 1934, Merck Corporation

Like you I'm an avid self-experimenter, and I'm always on the lookout for things to change that will either a) improve me, or b) help me understand myself better so I can do a). I was comparing notes recently with Seth Roberts (his QS posts are here) about what experiments we've done, what processes we've used to do them, and what lessons we've learned from them. I thought I'd share some of my take-aways with you and ask what you've learned from your own self-experimentation.

Keep experiments specific and simple

A mistake I've commonly made in the past is trying to track too many things at once. For example, a year ago I was terribly fatigued and decided to improve my sleep quality. I tried a bunch of things [1], but I wasn't careful about keeping them separate, or about stopping one before starting the next. The lesson is that the changes you make ("treatments") and the things you measure ("variables") should be simple and few. The general goal is to maximize the amount of information you get for the least amount of effort. This should tell you where to go next.

([1] I made changes like going to bed when I first felt tired, implementing a calming and regular nightly routine, eliminating caffeine, stopping using the computer from 9pm on, cutting out bright lights before bedtime, not eating or exercising right before, and taking drugs like Ambien and Xanax. Results: The first technique was, and continues to be, helpful, but time's erasure of the stress from a family emergency a year ago made the biggest difference.)

Know the type of your design

Though I've been experimenting on myself for many years, it was only recently that I understood the basic structure of testing things on myself. I've learned that most self-experiments are a type of back-and-forth process called a "reversal" or "ABA" design. From the Wikipedia article:

The reversal design is the most powerful of the single-subject research designs showing a strong reversal from baseline ("A") to treatment ("B") and back again. If the variable returns to baseline measure without a treatment then resumes its effects when reapplied, the researcher can have greater confidence in the efficacy of that treatment.

The idea as I understand it is straightforward, but it helped me to lay it out:

  1. Define the question you're trying to answer (e.g., "Is grinding during the day causing my tooth pain?"),
  2. Decide one thing that you're going to change (e.g., wear a night guard during the day),
  3. Decide at least one corresponding measurement you'll make (e.g., pain on a scale of zero to two),
  4. Start taking measurements for a while (you're in the first "A"),
  5. Implement the change and keep measuring (now you're in "AB"),
  6. Then cut out the change and continue measuring until you're done ("ABA").

What you'll look for is whether your variable changes during the "AB" and "BA" transitions. If it does, you probably found something. If not, try something new.
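To make the comparison concrete, here's a minimal sketch of checking whether a variable changed between phases (the phases and scores below are made-up numbers, not real data):

```python
from statistics import mean

# Hypothetical daily pain scores (0-2 scale) for each phase of an ABA design.
phases = {
    "A1": [2, 2, 1, 2, 2],  # baseline: e.g. night guard only
    "B":  [1, 0, 1, 0, 1],  # treatment: e.g. guard worn during the day too
    "A2": [2, 1, 2, 2, 1],  # back to baseline
}

# Average the variable within each phase.
averages = {phase: mean(scores) for phase, scores in phases.items()}
print(averages)

# If B is clearly lower than both A phases, the treatment probably had an
# effect; if A2 stays low too, something else may be going on.
effect = averages["A1"] - averages["B"]
print(f"Apparent treatment effect: {effect:.1f} points")
```

This is deliberately crude (no statistics beyond a mean), but even eyeballing per-phase averages like this is often enough to decide whether you "probably found something" or should try something new.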

For example, as hinted at above, one experiment I'm running aims to reduce pain I have in a certain tooth. I've tested a number of things (including cutting out ice cream and acidic foods) and now I'm investigating the contribution my grinding might be making to the problem. In this case "A" is wearing a mouth guard at night (my baseline), and "B" is wearing it during the day too. I just finished the second "A," and my results were surprising: not much difference! (I'm now investigating what appears to be a diurnal cycle.)

A second, odder type of experiment is one that takes advantage of the subject's symmetry by testing two treatments in parallel. (I'm told that in statistics this is called "blocking," but I haven't found a good reference yet.) I recently used this to test different cold-weather clothing for mountain biking by wearing different footwear on the left and right sides during the same ride. One result was that the order of sock/bootie/neoprene layers did not matter; neither side was appreciably warmer than the other. Another left/right example is what a friend did when she got poison ivy. She didn't know which over-the-counter treatment to use, so she tested one on each side - brilliant! (In this case, my friend found out something that no one could have told her - how well the treatments worked for her. This highlights a fundamental truth that underlies much of our work: What matters most to me is not whether it works for everyone, but whether it works for me.) A final experiment is one I started last year when we painted half of our house with latex paint and the other half with an oil-based one. It's only been a year, but there is zero difference so far.

I am very curious to hear if you know of other types of designs besides these two.

Allow time for understanding to grow

A frustration I face with complex subjects (like human bodies, and our behavior and relationships) is that it can take time to figure out which variables are relevant to the problem, and how long they take to demonstrate their effect. For example, I've struggled with a mood disorder for years, and, like my insomnia experiments, I've tested out different remedies such as meditation and medications. However, it's not so straightforward when you factor in life events, stressors, and biochemical "weather." (I found the post QS Measuring Mood gave a great overview of the complexities of measuring mood, by the way.) Another example is changing diet. Maybe you're different, but it took months before I noticed certain results of a vegan diet. Or take business networking - how do you determine results when effects are indirect, especially with social investments like meeting people?

What I took away is the need to be patient, giving results time to emerge, and to be flexible: when you notice something useful, you might switch to a new line of experimentation, or, alternatively, drop an experiment that doesn't seem to be producing results.

Know that you're doing one

This might be obvious to you, but there have been times when I've caught myself doing "stealth" experiments that, had I thought of them that way, would have led me to do something very different. As an extreme example, I started dabbling with Twitter in 2007 and eventually got sucked into spending up to an hour a day on it. Fast forward two years, and it hit me during a late-night Twitter fugue that I didn't actually know why I was using the bloody thing! What I should have done was clarify what my goal was, what I'd be testing, and how long I'd let it run. Again, because the results can be indirect, you might have to get clever in creating measures. For example, if you're trying to create business by forming relationships on Twitter, one thing to try is simply asking prospective clients how they found out about you. Again, ideally you'd use an ABA design.

Doing something, even if it's imperfect, is better than nothing

This lesson was important to me because a major result of my analysis of over 100 experiments I've done in the last five years is that many were non-quantified. Not having numbers limited some forms of analysis and learning, but at the same time doing them helped me make lots of improvements in my life. For me what's crucial is the experimental mindset itself - looking at life with curiosity, and bringing mainly questions (rather than answers) to how we go about improving ourselves. Though it's a cliche, I think it's true that the only failed experiment is one that you didn't learn from.

Friday
Jan 14, 2011

Your life in data: Is it all about events and properties?

[cross-posted from Quantified Self]

Micrometer

I'm designing the data layer for my site, and it's got me thinking about the essentials of what it is exactly that we track when self-experimenting. Putting on my ontologist's hat, I've come up with two kinds of things that I think cover anything a human would want to track (I might as well be provocative here), and I'd love to get your feedback on how well it makes sense. I'm excited to ask the question here because, frankly, I'm not sure who else would understand it. Here goes.

It seems to me that all things measurable can be reduced to events and properties. (These are inspired by methods and attributes in object-oriented programming.) Events are things we do, like "ran 2-1/2 hours," "took 81 mg of aspirin," or "forced myself to laugh." They answer the question about the past, "What did you do?" Properties, on the other hand, capture some aspect of a thing at a particular moment, such as "the number of pimples on my face," "my location," or "whether it's raining." They answer the question about the present, "What is the state of the thing right now?" (An aside: If those two cover past and present, what role does the future play in this context? I wonder if it is theory, given its utility in prediction.)

Let me give you an example. I'm currently experimenting with ways to reduce tooth pain (I'm a grinder), and I've tested a number of possible causes so far - sugar, acidic foods, airflow, food temperature, and (as of this moment) daytime grinding. Each of these has events I care about: "ate ice cream," "mountain biked," and "inserted/removed night guard." There's only one corresponding property I'm tracking, "amount of pain in tooth 24." Properties and events.
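As a rough sketch of how these two kinds might look as data records (the class and field names are my own, not from any existing tool):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Events answer "What did you do?" - they may carry an optional quantity,
# since sometimes only the occurrence matters.
@dataclass
class Event:
    name: str                       # e.g. "ate ice cream", "mountain biked"
    when: datetime
    amount: Optional[float] = None  # e.g. hours biked; None if occurrence alone matters

# Properties answer "What is the state of the thing right now?" -
# here a measurement is intrinsic, so the value is required.
@dataclass
class Property:
    name: str       # e.g. "amount of pain in tooth 24"
    when: datetime
    value: float    # e.g. pain on a 0-2 scale

# A day's log mixes the two kinds freely.
log = [
    Event("mountain biked", datetime(2011, 1, 14, 9, 0), amount=1.5),
    Event("inserted night guard", datetime(2011, 1, 14, 8, 0)),
    Property("pain in tooth 24", datetime(2011, 1, 14, 21, 0), value=1.0),
]
print([type(entry).__name__ for entry in log])
```

Note how the sketch encodes the asymmetry discussed here: an event's `amount` is optional (occurrence alone can be the datum), while a property without a `value` wouldn't mean anything.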

Another example is if you want to test the relationship between your happiness and sunny weather. In this case the event might be "exposed myself to sunshine," and the property would be "my happiness." The reason this scheme seems reasonable to me is the nature of scientific inquiry. The way I think of how we experiment is that we: 1) think about some aspect of our lives we want to explore, 2) dream up something small we can change, 3) do it while collecting measurements, and then 4) analyze that data to decide what effects the changes had. Notice that there are two things at play in #3: actions that I take (or things that happen around me), and measurements. Voila - events and properties!

Unfortunately, the word "measurements" in that last bit causes a problem. To me, measurement answers the question "How much of it?", and that applies to both events and properties. If we take the events in my bruxism experiment, what I want to know about them is a) how many hours I biked, and b) how many hours I wore the night guard before going to bed. Obviously I can measure these. For properties, what's interesting is that, unlike with events, the measurements are intrinsic; all you can really do is ask how much of a property there is. (I haven't nailed this down, but I think this dichotomy arises because the mere existence of an event is of value. That is, we're primarily interested that it occurred at all. Its quantity may or may not be useful. For example, I might not care how much I rode my bike, but simply whether I rode at all today.)

So there you have it. I'm really curious:

  • What do you think of this analysis?
  • Would typical self-experimenters (if there is such a person) get it?
  • If you were designing an experiment, would thinking about measurements in this way help you?
  • How do existing tools chop the world up? In particular I'm not sure I've seen properties explicitly identified. (Note: I've listed the multi-purpose tools I know about in my answer to What are some alternatives to Daytum?)

[Image from bartmaguire] (Matt is a terminally-curious ex-NASA engineer and avid self-experimenter. His projects include developing the Think, Try, Learn philosophy, creating the Edison experimenter's journal, and writing at his blog, The Experiment-Driven Life. Give him a holler at matt@matthewcornell.org)

Friday
Dec 24, 2010

What will you track over the holidays?

[cross-posted from Quantified Self]

Alexander Alexeieff, color wood engraving for Pushkin's The Queen of Spades (London, Blackamore Press, 1923)

A touchy-feely post this week: I'd love to hear your suggestions on meaningful things we might track during the holidays, now or whenever they are celebrated. Here are a few ideas I had in the social and health categories. What are yours?

  • # social events participated in
  • # laughs
  • # times you stopped and took a breath
  • # family stories told
  • # people you told how you feel about them
  • # smiles obtained
  • # offers of help to someone
  • # times you felt gratitude
  • # times you gave the gift of listening
  • # times you went for a walk
  • # servings of food
  • # alcoholic drinks consumed
  • # sweets consumed

Friday
Dec 17, 2010

Is There a Self-Experimentation Gender Gap?

[cross-posted from Quantified Self]

As I get to know the QS community and the wider life-as-experiment one, I've noticed something troubling: in some areas there seem to be more men participating in our work than women. In this post I'll try to identify the problem, suggest a couple of causes, and then get your feedback on what you think is going on and how we might improve things. (Note: I present this in the spirit of making our work accessible to everyone, and in hopes of getting a discussion going. I'm taking a bit of a risk here, so if I accidentally ruffle any feathers, please be generous and let me know so I can fix it up.)

Lab Technician for Girls Set, 1958

The problem

This first came to my attention during a conversation I had earlier in the year with a very bright friend I was collaborating with on my Edison project. While discussing who our possible users might be, she made the casual remark that self-tracking tools are a "guy thing." To back this up she pointed out that most of the experiments created in Edison are by men. She went on to wonder whether the very idea of treating life as an experiment appeals more to men than women. "What woman would want to look at relationships as experiments?" she asked.

She was being intentionally provocative to make the point, which worked because I've been thinking about it ever since. I didn't (and still don't) believe that there's anything intrinsic to our QS work that's not gender-neutral, but I think there are factors that go toward explaining what she noticed.

Let me give you a few very rough data points from asking around the community. Please note that I don't offer these examples as concrete evidence, but to make the point that, at least by some measures, there's a gender imbalance that we might need to be aware of.

  • QS comments: I looked back at the most recent comments here going back a few months. I found about 80, ruled out ones by me and my fellow contributors, and came up with an approximate male-to-female ratio of 80/20 (20% of comments were by women, the rest by men).
  • QS videos: I also reviewed the videos that have been uploaded to the QS Vimeo page and found approximately the same ratio: 80/20.
  • Boston QS Meetup: Eyeballing the last two meetings (both of which were excellent thanks to Michael Nagle), I estimated around a 90/10 ratio, men to women.

Possible factors

This got me thinking about what might be going on. Here are a few questions.

Tools vs. community: Could it be that having a gadget or tool focus selects predominantly for guys? My colleague suggested that while men are attracted to tools, women are more drawn in by community and collaboration. In an email conversation, Alex told me she'd talked a bit about this with a researcher who wondered whether men like tracking numbers and women like tracking thoughts/stories, like in diaries.

Topic of interest: For sites that are specific to one area of tracking, does the domain attract one gender over the other? I ran this past Alex, who said that at CureTogether ~2/3 of her members are women. In this reply she suggests a hypothesis:

Thanks for the question, Faren! According to a 2004 Kaiser report, more women are affected by chronic conditions than men. However, it is also possible that since we started with women's health conditions at CureTogether, we have attracted more women than men to join as members.

Gender and science: Above the level of tools and sites is the perspective of poking, prodding, and measuring as a way to go about the world. Is there an underlying social bias that keeps women away? In my Think, Try, Learn work I argue that the urge to discover is a fundamental characteristic of being human, but there continue to be disturbing gender barriers to women in science. For example, see the Boston Globe article Women, science, and the gender gap and The Daily Beast's Women in Technology: Is There a Gender Divide?

Initial population: How important to gender make-up is the seed group of people who first heard about the work? When I talked with Michael about this at the last QS Boston meetup, he mentioned that his network included many technical folks, a field which is still biased toward guys. For Edison I tapped my productivity blog, which for some reason appealed to men. And as Alex pointed out above, she started with women's health conditions.

Site design: This might be a little out there, but does the appearance of a site matter to gender participation? The controversial post He Said, She Said - Web Design by Gender, in spite of critiques, got me thinking about this. Beyond look-and-feel, I'm curious about how the interaction of quantitative and collaborative tools might influence gender usability.

My take

My belief is that the ideas and tools we talk about here on QS can appeal to anyone who wants to make his or her life better, regardless of occupation, gender, or background. If there are unnecessary barriers to someone getting involved, then I want to lower those. I'm also motivated because as the father of a 10-year-old daughter I have a strong desire to give her tools to make her successful, including the perspective of an experiment-driven life. We've already applied this to things like social engineering (exploring ways to work with the principal to get a "hat day"), repairing things ("Let's try using duct tape!"), testing the foibles of human memory ("I have to floss again tonight? But I *know* I did it last night!"), and investigating cause and effect relationships (tracking her mood to see how it relates to needing a snack).

Questions for you

  • Do you think there's a significant gender gap in self-tracking?
  • If you're a woman, what barriers do you see in learning about QS or practicing it?
  • If you have a self-tracking product or site, what gender information have you learned about your customers or members? What do you make of it?