Welcome to the IdeaMatt blog!

My rebooted blog on tech, creative ideas, digital citizenship, and life as an experiment.

Entries in quantified self (19)

Saturday
Jul 2, 2011

Keeping motivated in your self-tracking

Measuring West

I recently received an email from someone having trouble keeping up with her experiment. While there is lots of general advice about discipline and motivation, this got me thinking about how doing personal experiments might differ. Following are a few brief thoughts, but I'd love to hear ways that you keep motivated in your quantified self work.

The desire to get an answer. The main point of an experiment is to get an answer to the initial question. "Will a Paleo diet help me manage my weight?" "Does talking less bring me closer to my kids?" Maybe the principle at play is that experiments which motivate start with great questions.

Built-in progress indicators. If you've set up your experiment well, you should have measures that come in regularly enough to keep you interested. This is assuming, of course, that you care about the results, i.e., that you've linked data and personal meaning (see below). But unlike other types of projects, maybe we can use the periodic arrival of measurements to stimulate our motivation, such as celebrating when new results appear.

The joy of satisfying a mental itch. Curiosity is a deep human motivation, and experiments have the potential of giving your brain a tasty shift - such as when you are surprised by a result. I especially like when a mental model of mine is challenged by a result. Well, sometimes I like it.

Sharing with like-minded collaborators. At a higher level of motivation, experimenting on yourself is an ideal framework for collaboration with folks who are either 1) interested in your particular topic (e.g., sleeping better or improving your marriage), or 2) living an experiment-driven life. It is encouraging to get together with people to share your work, and to receive support, feedback, and ideas. Of course it feels good to do the same for them.

Desire to make a change. Finally, if we come back to why we experiment, there should be a strong self-improvement component to what we are tracking. My argument is that, ultimately, it's not about the data, but about making improvements in ourselves for the purpose of being happier. If the change you are trying is not clearly leading in that direction, then it might make sense to drop it and try something more direct. Fortunately, with self-experimentation there is usually something new you can try.


Underlying all of these, however, is the fact that the work of experimentation takes energy. Every step of an experiment's life-cycle involves effort, from thinking up what you'll do (creating a useful design), through running the experiment (capturing and tracking data), to making sense of the results (e.g., the "brain sweat" of analysis). Given our crazy-busy lives, there are times when we simply can't take on another responsibility. So if you find yourself flagging and losing interest in one of your self-experiments, then maybe that is itself some data. Thoughts?

[Cross-posted from Quantified Self]

Sunday
Jun 12, 2011

What makes a successful personal experiment?

Contraption

As I continue trying to stretch the concept of an experiment so that a wide audience understands how to apply a scientific method to life, I struggle with defining success. While the trite "You can always learn something" is true, I think we need more detail. At heart is the tension between the trial-and-error nature of experimentation (I prefer the term Edisonian approach) - which means outcomes are unpredictable - and our need to feel satisfaction with our work. Here are a few thoughts.

Skillful discovery. Rather than being attached to a particular outcome, which we have limited control over, I've found it's better to focus on becoming an expert discoverer and mastering the process of experimentation. Because you have complete control over what you observe and what you make of it, you are guaranteed success. Fortunately, there's always room to develop your investigatory skills.

Fixing the game. At first it might seem contrived, but carefully choosing what you measure can help implement a scientific perspective on success. For example, instead of framing a diet experiment as "Did I lose weight?", it is more productive to ask "How did my weight change?" The former is a binary measure (losing weight = success, not losing = failure) and one that you don't necessarily have control over. After all, you are trying an experiment for the very reason that you don't know how it will work out. The latter phrasing is better because it activates your curiosity and gives you some objectivity, what I call a "healthy sense of detachment."

Improving models. As essentially irrational creatures, we run the risk of not questioning what we know. Updating our mental models of people, situations, and the world helps us to be more open to improvements. And the leading edge of that is the conflict between expectation (predicted outcome) and reality (actual results, AKA data). The quantified way to work on that is by explicitly capturing our assumptions, testing them, taking in the results, and adjusting our thinking as necessary. This also leads to better predictions; from The Differences Between Innovation and Cooking Chili:

Of course, all of the experimental rigor imaginable cannot guarantee success. But it does guarantee that innovators learn as quickly as possible. Here, "learn" means something specific. It means making better predictions. As predictions get better, decisions get better, and you either fail early and cheap (a good outcome!) or you zero in quickly on something that works.

Getting answers. Another way to guarantee success is by going into an experiment with clearly formulated questions that your results will answer. Structured correctly, you know you will get answers to them. I think of it as regardless of what happens, you have found something out. (Hmm - maybe thinking of the process as active discovery is a richer concept than the generic "you learned something.")

Designing for surprise. If the product of your experiment was not very surprising, then maybe you should question your choice of what you tried. Exciting experiments probe the unknown, which ideally means novelty is in store. Fill in the blank: "If you're not surprised at the end of your experiment, then __."

Zeroing in. Because we usually dream up experiments with a goal in mind, chances are we come out the other end having moved some amount in the direction of attaining that goal. Progress is a success, so give yourself a pat on the back.

Taking action. Finally, each experiment is a manifestation of personal empowerment, which is a major success factor in life. While health comes to mind (do difficult patients have better results?), I think generally the more we take charge of our lives, the closer we get to happiness.

What do you think?

 

[Cross-posted to The Quantified Self]

Monday
May 16, 2011

Personal Development, Self-Experiments, and the Future of Search

[Cross-posted to Quantified Self]

Confusing Traffic Sign, Boston MA

We experiment on ourselves and track the results to improve the way we work, our health, and our personal lives. This rational approach is essential because there are few guarantees that what works for others will work for us. Take the category of sleep, for example. Of the hundreds of tinctures and techniques available, clearly not all help everyone, or there would be exactly one title in the sleep section of your bookstore, called "Sleep," and no one could argue about its effectiveness. Treating these improvements experimentally, however, requires a major shift in thinking.

But being human isn't that simple. There are variables and confounding factors that mean you have to take matters actively into your hands if you want to really know what's personally effectual. That's why what we do here is so exciting. Instead of accepting common sense, we take a "prove it to me" approach and work to find out for ourselves. Operating from this basis, rather than faith, is more effective in the long run. (It's why we use science to understand the world, rather than astrology or phrenology, for example. Just look at what we've accomplished.)

As I tried to say in Making citizen scientists, this is heralding a move from citizens-as-helpers to true citizen scientists - people who get genuinely curious about something and decide to test things out for themselves, rather than simply trusting what others say will work. If we expand that vision five or ten years in the future, I think there could be a major shift in how we search for ways to improve ourselves, and that's what I want to share here.

Picture that you have something you're trying to change, and you want to find starting points, especially what's worked for others. Ideally you know someone who can point you in the right direction (medical professionals come to mind), but rarely do our social and professional circles cover everything we want to improve. Normally we are on our own, and must sit down and courageously cast ourselves into the vast sea of the web. What do we find? An insane number of hits mysteriously organized by clever algorithms. Again, take sleep. My search for insomnia yielded over three million hits. How do we decide how to use these results? I think the fundamental issue is one of trust. How can I trust what I read if there is nothing backing it up? Google is great for some things, but for self-improvement, what I want isn't necessarily what's popular (let's face it, the popular kids at school weren't necessarily the smartest - spoken from experience). What I want is something that's a reasonable starting point. That is, something that has a high likelihood of yielding useful information that moves me quickly in a helpful direction. (My scientific colleague calls this exploring the "search frontier.")

Instead, what we have is an immense collection of definitions, blog posts, news articles, how-tos, marketing literature, product reviews, fringe nuttiness, and the like. In other words, a Wild West of self-improvement. Exciting, dangerous, risky, unproven, and loaded with potential.

What's closer to what we want are the discussion board threads and blog comment exchanges where people share what they've tried and what has and hasn't worked. After a quick search, two examples that came up in the sleep realm are at Talk About Sleep and iVillage. However, we have three problems. First, my search for insomnia discussion boards still generated almost three million hits. That means we don't know where the quality work is taking place. Second, and far worse, is that the way people have gone about their search for solutions is rarely principled. This is because applying a scientific approach to self-help, as I tried to explain above, is still rare. (Want to test that? Just try to tell someone why you're reading this site, and watch the confusion on his face. "You're tracking what? But WHY?") With questionable methodologies come questionable results. Finally, if there truly are well-run experiments, they are scattered throughout the site on various threads, the data is likely not in a form we can analyze, and it's very hard to find who else has tried the experiments and what they discovered. In other words, the knowledge, experiences, and results of everyone's hard work isn't structured or centralized. And that's a massive waste.

Now imagine that there is a community of self-experimentation tools that meet the three characteristics I outlined in my experiment-driven life talk (my original post is here): Broad, Social, and Scientific. On these sites will be records of millions of past and in-progress experiments being performed daily by thousands of citizen scientists, both individually and collectively. They will be structured to expose what folks did specifically, how the process went for them (in some ways as important as results themselves), what the data was, and how they interpreted it. And they will have targeted search tools to find experiments in a variety of ways - topic, treatment type, ratings, etc.
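To make that concrete, here's a rough sketch of what one of those structured experiment records - and a naive search over a pile of them - could look like. It's purely illustrative: the field names and the little search function are my own guesses, not a description of any existing tool.

```python
# Hypothetical sketch of a structured experiment record and a naive search
# over a collection of them. Field names are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExperimentRecord:
    topic: str                 # e.g., "sleep", "diet"
    question: str              # what the experimenter asked
    treatment: str             # what they actually tried
    duration_days: int         # how long they ran it
    outcome_summary: str       # how it went, in their own words
    rating: int                # 1-5 self-rated usefulness
    tags: List[str] = field(default_factory=list)

def search(records, topic=None, min_rating=0, keyword=None):
    """Return matching records, best-rated first."""
    hits = []
    for r in records:
        if topic and r.topic != topic:
            continue
        if r.rating < min_rating:
            continue
        text = (r.question + " " + r.outcome_summary).lower()
        if keyword and keyword.lower() not in text:
            continue
        hits.append(r)
    return sorted(hits, key=lambda r: r.rating, reverse=True)
```

Even something this simple would beat three million unstructured hits, because every result comes back with a question, a treatment, and an outcome attached.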

Visualize yourself going to one of those sites and searching folks' work for your topic. Assuming they're structured reasonably, you could find something marvelous: actual personal experiments, around your situation, with their data and conclusions. Wow! This should allow you to get a summary view of the things people have tried, who else tried them, and what they learned. In other words, it would be a data-driven entry point for deciding what you should start trying.

Ultimately, I envision this moving towards a kind of personal development "general store" where instead of facing an intimidatingly large self-help section in your bookstore (i.e., the big-box help-yourself model), you come to a desk with one person sitting and asking, "How can I help?" You tell her your topic and a little about yourself, then ask for advice. The clerk, who has read all of the books on the subject, answers, "For someone like you, the most effective experiments were ..." and lists five or six to look at. I think of it as a kind of intelligent, experiment-based search engine that factors in personal data, demographics, and topic (maybe as simple as keywords, for starters) and serves up ballpark suggestions.

What do you think? My ideas are still developing, so this is still a little rough, but I honestly believe we could build such an ecosystem. Are you with me?

Friday
Apr 15, 2011

Micro Experiments

getting a measure

[Cross-posted to Quantified Self]

What's the smallest thing you've tracked that had a short turnaround time but generated useful results? I've noticed that the kinds of things we try in the Quantified Self community are often longer-term experiments that seem to be a week or two long at a minimum. I think this is primarily because the effects of what we try need time to emerge. (This brings up the issue of how much value there is in investigating subtle results, which came up at our recent Boston QS Meetup - recap here.)

However, as I work to adopt an experimental mindset about life, I've noticed these efforts can vary in scope, duration, and complexity. Because interesting things happen at extremes, I've been exploring the very smallest class of activity, what I call micro experiments. I've found that trying little things like these is a great way to test-drive treating things as experiments, and maybe offer non-QS'ers a chance to dip their toes into the idea of tracking on a tiny scale. (Of course you shouldn't risk shortening your life over any of them.) Researching the idea didn't turn up much, though Micro-Experiments and Evolution was stimulating.

Here are some examples I've tried and their results. Are they true experiments? Are they useful? I'm curious to know what you think.

Jing: I tried using Jing, a free tool for doing short screencasts, to explain a bug I found in my site. I usually write them up, but because it was complex, it would have taken a lot to explain it. Instead I created a four-minute screencast, emailed the link to my developer, and measured the results. Conclusion: Worked great! Time to record: 4 minutes. His understanding of the problem: High. Enjoyment level of trying a new tool: Fun.

Testing expectations: Left unchecked, I tend to be pessimistic and anxious, which I continue working to improve. Here's a technique I stumbled on that works well in micro experiment form. The idea is to treat your expectations as a model, make your assumptions and predictions explicit, then put them to the test. I applied it to two difficult phone calls I had scheduled, and found that my expectations were way off. In one case I was asking a fellow writer for a favor (mentioning an ebook I created), and instead of turning me down (my working model), he was happy to help. The other was a sales call in my last career to a prospective client, which I expected to go swimmingly. Instead it was a disaster! After analyzing what happened and comparing it to my model, I formed a couple of new ideas on how to do future calls. Surprisingly, the minute I thought of these as experiments and wrote down my expectations, I felt immediate relief before the calls.
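For the record, here's roughly how I think about structuring these expectation tests. A minimal sketch - the functions and fields are my own illustration, not any particular tool:

```python
# Minimal sketch of the "expectations as a model" micro experiment:
# write the prediction down before the event, record the outcome after,
# then compare. The structure is illustrative only.
predictions = []

def predict(event, expected, confidence):
    """Log a prediction before the event happens."""
    predictions.append({"event": event, "expected": expected,
                        "confidence": confidence, "actual": None})

def record(event, actual):
    """Fill in what actually happened."""
    for p in predictions:
        if p["event"] == event:
            p["actual"] = actual

predict("favor call with fellow writer", "he turns me down", 0.8)
record("favor call with fellow writer", "he was happy to help")

for p in predictions:
    hit = "hit" if p["expected"] == p["actual"] else "miss"
    print(f"{p['event']}: expected '{p['expected']}' "
          f"({p['confidence']:.0%} sure), got '{p['actual']}' -> {hit}")
```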

Pay for someone's parking: As a touchy-feely micro experiment, I was standing in line to pay for parking at a garage, and on a lark I decided to pay the next person's fee (it's almost always $0.50). I didn't know how they'd react (find it odd and refuse, for example), but the result: Evident happiness level of subject: High (I got a nice smile). My feeling: Walked away with a lighter step.

Disabling email: I continue to struggle keeping email from sucking my time and attention, so I tried disabling my email program for a day. This email vacation was helpful, but surprisingly uncomfortable. Not being able to monitor it clearly indicated a bit of an addiction. I didn't end up adopting it.

Decisions and glue: I sometimes stress about getting something new perfect the first time. Yes it's unrealistic, but that's the brain I'm stuck with. Treating the decision as a micro experiment helps me enjoy things more. For example, I had to repair two broken lawn chairs at home, and couldn't decide which of two glues to try. Then I realized this was a natural parallel type of experiment, and tried them both, one per chair. Result: Gorilla glue worked far better than the GOOP. Trivial? Maybe, but next time I don't have to wonder.

Not eating before exercise: Eating breakfast is commonly considered important, so I wondered what would happen if I skipped eating all morning and then went mountain biking at 1pm for an hour. Result: My performance was just fine, but I was hungry afterwards! Now I don't worry so much if I'm pressed for time.

Getting a bank fee waived: My wife needed a document notarized, so I brought her to the mega-bank where I was forced to do business for a time. The teller said she couldn't notarize it because my wife wasn't listed on my account. In a bold (for me) move I did a social experiment by asking for the manager, who ended up OK'ing it, no problem. I was a little embarrassed until I thought of it experimentally.

Chocolate skin, cranberry sauce: There are lots of ways to experiment in the kitchen; here are two micro experiments I tried. First, I drink hot chocolate every morning (melt the expensive dark stuff into milk) and it sometimes develops a skin on top. (Hey - I discovered pudding!) To avoid that, I tried putting the heat on high and stirring constantly, instead of my usual medium heat with less stirring. The question was whether heat/time would affect skin forming. Result: ~50% reduction. As a second example, we had some leftover cranberries (I live in New England) and I wanted to make a sauce, but I was too lazy to follow a time-consuming recipe. Instead I microwaved a handful of them in a bowl with a little orange juice and honey. Result: An explosion of flavor. (Literally - it blew up while cooking.) Edibility was marginal.

Friday
Apr 1, 2011

Quantified Self Boston Meetup #5, The Science of Sleep: Recap

[Cross posted to Quantified Self]

QS Boston Meetup #5 was held on Wednesday on the topic "The Science of Sleep," a subject that comes up here regularly. The event was a major success and, to my mind, demonstrated powerfully the potential of the self-experimentation movement and the exceptional people making it happen. Here is a brief recap of the evening, with my comments on what was discussed. A big thanks to Zeo for their generous support of the meeting, to QS Boston leader Michael Nagle, and to sprout for hosting the event.

Experiment-in-action: A participatory Zeo sleep trial

Michael put the theme into action uniquely by arranging for a free 30-day trial of Zeo sleep sensors to any members who were interested in experimenting with it and willing to give a short presentation about their results. Over a dozen people participated, and the talks were a treat that stimulated lots of discussion. I thought this was an excellent use of the impressive members of this community, as the talks demonstrated.

Steve Fabregas

Zeo research scientist Steve Fabregas kicked off the meetup by explaining the complex mechanisms of sleep, and the challenges of creating a consumer tool that balances invasiveness, fidelity, and ease of use. He talked about Zeo's initial focus (managing sleep inertia by waking you up strategically), which - in prime startup fashion - developed into the final product. Steve also gave a rundown of the device's performance, including the neural network-based algorithm that infers sleep states from the noisy raw data, something he said even humans have trouble with. There were lots of questions afterward, including about their API and variations in data based on age and gender. All in all, a great talk.

Sanjiv Shah

Sanjiv started out the sleep trial presentations with a lively talk about the many experiments he's done to improve his sleep, including a pitch-black room, ear plugs, and no alcohol or caffeine. But the biggest surprise (to him and us) was his discovery of how a particular color of yellow glasses, worn three hours before bed, helped his sleep dramatically. This is apparently based on research into the sleep-disturbing frequencies of artificial light. He shared how wearing these also helped reduce jet lag. The talk was a hit, with folks clamoring to know where to get the glasses. I found this page helpful in understanding the science. (An aside: If you're interested in trying these out in a group experiment, please let me know. I am definitely going to test them.)

Adriel Irons

Adriel studied the impact of weather on his sleep (via the Zeo's calculated ZQ) by recording things like temperature, dew point, and air pressure. He concluded that there's a possible connection between sleep and changes in those measures, but he said he needs more time and data. Audience questions were about measuring inside vs. outside conditions, sunrise and sunset times, and cloudiness.
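As an aside, this kind of question lends itself to a simple correlation pass over the nightly data. Here's a sketch of the sort of first check one might run - the file and column names are hypothetical, not Adriel's actual data:

```python
# Sketch of a first-pass check for a weather/sleep connection.
# Assumes a CSV with one row per night: ZQ plus that day's weather readings.
# File and column names are made up for illustration.
import pandas as pd

df = pd.read_csv("sleep_weather.csv")  # columns: zq, temp_f, dew_point_f, pressure_in

# How strongly does each weather variable track the nightly ZQ?
print(df[["temp_f", "dew_point_f", "pressure_in"]].corrwith(df["zq"]))

# Day-to-day *changes* in the weather may matter more than absolute values:
print(df[["temp_f", "dew_point_f", "pressure_in"]].diff().corrwith(df["zq"].diff()))
```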

Susan Putnins

Susan tested the effect of colored lights (green and purple) on sleep. Her conclusion was that there was no impact. As a surprise, though, she made a discovery about the side effects of a particular medication: none! This is a fine example of what I call the serendipity of experimentation.

Eric Smith

Eric tried a novel application of the Zeo: Testing it during the day. His surprise: The device mistakenly thought he was asleep a good portion of his day. He got chuckles reflecting on Matrix-like metaphysical implications, such as "Am I really awake?" and "Am I a bizarre case?" His results kicked off a useful discussion about the Zeo's algorithms and the difficulty of inferring state. Essentially, the device's programming is trained on a particular set of individuals' data, and is designed to be used at night. Fortunately, the consensus was that Eric is not abnormal.

Jacqueline Thong

Jacqueline finished up the participatory talks with her experiment to test whether she can sleep anywhere. Her baseline was two weeks sleeping in her bed, followed by couch and then floor sleep. Her conclusion was that her sleep venue didn't seem to matter. One reason I liked Jacqueline's experiment is that, as with many experiments, the surprises are so rich and satisfying. Think bread mould. She said more data was needed, along with more controls. Sadly, she wondered whether her expensive mattress was worth it. Look for it on eBay.

Matt Bianchi

Matt Bianchi, a sleep doctor at Mass General, finished out the meetup with a discussion of the science and practice of researching sleep. Pictures and a description of what a sleep lab is like brought home the point that what is measured there is not "normal" sleep: 40 minutes of setup and attaching electrodes, 200' of wires, and constant video and audio monitoring make for a novel $2,000 night. He said these labs give valuable information about disorders like sleep apnea, and at the same time, what matters at the end of the day is finding something that works for individuals. Given the multitude of contributing factors (he listed over a dozen, like medications, health, stress, anxiety, caffeine, exercise, sex, and light), trying things out for yourself is crucial. He also talked about the difficulties of measuring sleep, for example the unreliability of self-reported information. This made me wonder about the limitations of what we can realistically monitor about ourselves. Clearly tools like Zeo can play an important role. Questions to him included how to be awake more (a member said, "I'm willing to be tired, but not to die sooner"), to which he replied that the number of hours of sleep each of us needs varies widely. (The eight hour guideline is apparently "junk.")

Matt's talk brought up a discussion around the relative value of exploring small effects. The thought is that we should look for simple changes that have big results, i.e., the low-hanging fruit. A suggested heuristic was that if, after 5-10 days, you're not seeing a result, you should move on to something else. A related rule might be that the more subtle the data, the more data points you need. I'd love to have a discussion about that idea, because some things require more time to manifest. (I explored some of this in my post Designing good experiments: Some mistakes and lessons.)
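To put some rough numbers behind that rule, here's a back-of-envelope sketch (my own, not from the talk) using the standard two-sample sample-size approximation. It shows how quickly the required number of nights grows as the effect gets more subtle:

```python
# Back-of-envelope: how many observations per condition you need to detect
# an effect of a given size (difference divided by standard deviation).
# Standard two-sample approximation; purely illustrative.
from scipy.stats import norm

def n_per_condition(effect_size, alpha=0.05, power=0.8):
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# A big effect (1 standard deviation) needs about 16 nights per condition;
# a subtle one (0.3 SD) needs about 174.
print(round(n_per_condition(1.0)), round(n_per_condition(0.3)))
```

Which is roughly why the 5-10 day heuristic only catches the low-hanging fruit: subtle effects simply take more nights than most of us are willing to log.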

Finally, Matt highlighted the importance of self-experimentation. The point was that large trials result in learning what works for groups of people, but the ultimate test is what works for us individually. (He called this "individualizing medicine.") This struck a chord in me, because the enormous potential of personal experimenting is exactly what's so exciting about the work we're all doing here. All in all, a great meetup.

[Image courtesy of Keith Simmons]

 

(Matt is a terminally-curious ex-NASA engineer and avid self-experimenter. His projects include developing the Think, Try, Learn philosophy, creating the Edison experimenter's journal, and writing at his blog, The Experiment-Driven Life. Give him a holler at matt@matthewcornell.org)