Like I mentioned a few days ago, Experiment 2 of my dissertation project -- Project 15 -- has finished data collection. I've been slowly working my way through the data (I've been busy with some other projects) and I like how things are looking.

Just to summarize: we have people come into the lab and ask them to learn 100 words from semantic categories. After a distractor task, we give them a test on 300 items: the 100 targets (words they studied), 100 related lures (unstudied words from studied categories), and 100 unrelated lures (unstudied words from unstudied categories). On this recognition test, one of the 300 words is displayed and subjects respond "old" or "new." Following this judgment, subjects provide a confidence rating from 0 to 100. Finally, for words to which subjects responded "old," they make a remember/know/guess judgment. A remember judgment means subjects can recollect the episode of the word's prior presentation. A know judgment means subjects can't recollect the episode but know that the word was presented. And subjects respond guess when they were just guessing that the word is old.
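For anyone who likes to see designs in code, here's a minimal sketch of how a test list like this could be assembled in Python. The particulars -- 10 categories per item type, 10 items drawn from each, the pool names -- are made-up placeholders, not the actual Project 15 materials.

```python
import random

# Hypothetical word pools: 10 studied categories with 20 exemplars each
# (10 get studied, 10 are held out as related lures) and 10 unstudied
# categories with 10 exemplars each (unrelated lures).
studied_cats = {i: [f"studied_cat{i}_word{j}" for j in range(20)] for i in range(10)}
unstudied_cats = {i: [f"unstudied_cat{i}_word{j}" for j in range(10)] for i in range(10)}

targets, related_lures = [], []
for words in studied_cats.values():
    random.shuffle(words)
    targets += words[:10]        # studied at encoding
    related_lures += words[10:]  # unstudied, but from a studied category

unrelated_lures = [w for words in unstudied_cats.values() for w in words]

# The 300-item recognition test, randomly ordered.
test_list = ([(w, "target") for w in targets]
             + [(w, "related_lure") for w in related_lures]
             + [(w, "unrelated_lure") for w in unrelated_lures])
random.shuffle(test_list)
assert len(test_list) == 300
```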

Between-subjects confidence-accuracy correlations

The next thing to do is investigate the confidence-accuracy correlation as a function of remember, know, or guess. We can do that with scatterplots. Here we go:

Using the terminology we used in DeSoto and Roediger (2014), these scatterplots depict the between-subjects correlation, which indicates the degree to which more confident subjects are more accurate. And what these data show is that confident subjects are similarly more accurate whether responding remember or know, but there's not much of a relation for guess responses.
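If you want to compute this correlation yourself, here's a rough sketch of the analysis in Python/pandas. The file and column names -- subject, rkg, confidence, correct -- are hypothetical stand-ins for however your trial-level data are organized.

```python
import pandas as pd
from scipy.stats import pearsonr

# One row per trial; file and column names are hypothetical.
df = pd.read_csv("project15_e2.csv")

for rkg in ["remember", "know", "guess"]:
    trials = df[df["rkg"] == rkg]
    # Collapse to one point per subject: mean confidence and mean accuracy.
    by_subject = trials.groupby("subject").agg(
        mean_conf=("confidence", "mean"),
        accuracy=("correct", "mean"),
    )
    r, p = pearsonr(by_subject["mean_conf"], by_subject["accuracy"])
    print(f"{rkg}: between-subjects r = {r:.2f}, p = {p:.3f}")
```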

Between-events confidence-accuracy correlations

And here's another one:

Again using prior terminology (i.e., DeSoto & Roediger, 2014), these scatterplots depict the between-events correlation, which indicates the degree to which items responded to with greater confidence are also responded to with greater accuracy. These data, like the prior ones, show that increases in remembering are not substantially more predictive than increases in knowing. (But, like before, guesses are guesses.)
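The between-events version is the same computation with the grouping flipped: collapse over subjects to get one point per item rather than one point per subject. Again a sketch with hypothetical column names, including an item column identifying each word.

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("project15_e2.csv")  # same hypothetical trial-level file

for rkg in ["remember", "know", "guess"]:
    trials = df[df["rkg"] == rkg]
    # Collapse to one point per item this time.
    by_item = trials.groupby("item").agg(
        mean_conf=("confidence", "mean"),
        accuracy=("correct", "mean"),
    )
    r, p = pearsonr(by_item["mean_conf"], by_item["accuracy"])
    print(f"{rkg}: between-events r = {r:.2f}, p = {p:.3f}")
```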

02/28/14; 11:36:34 AM

Dave Winer alerted me to the Knight News Challenge, a source of funding designed to strengthen the Internet for free expression and innovation. I've been thinking for a few hours now about ways a project done with Fargo might foster expression and innovation.

One of the big strengths of Fargo, in my opinion, is that it enables quick entry of organized information that can be published in an open way (i.e., via OPML and HTML). This makes it a powerful tool for scientists. Behavioral researchers like me spend a great deal of time (1) writing and debugging experiment programs and software, (2) conducting data analyses with different statistical programs (e.g., SPSS) and programming languages (e.g., R, Python), and (3) writing up and disseminating the resulting findings. To do all of these steps well (and in an open, reproducible way), good documentation is a necessity. Fortunately, as I've said, Fargo enables good documentation.

So an initial idea is to build services that strengthen Fargo as a means for academic and research documentation. Right now I'm visualizing this as a "Fargo for academics," but of course, Fargo is already for academics (and programmers, poets, and everybody else), so when I say "Fargo for academics," I mean that in an abstract way.

One tangible link/possibility: I have become an interested follower of the Center for Open Science (COS), based in Charlottesville, VA. The COS built and supports a tool called the Open Science Framework, which encourages open documentation of research and other academic collaborations. Right now the Open Science Framework uses a wiki-like system to record researcher notes. This system might be enhanced considerably by a connection to a tool like Fargo. It would be really neat to link up Fargo and the Open Science Framework for better research documentation, where you publish a note in Fargo and the OPML gets sucked up by the Open Science Framework.
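The plumbing here wouldn't be hard, at least on the reading side, since a published Fargo outline is just OPML, which is just XML. Here's a minimal Python sketch of pulling down an outline and walking it. The URL is a made-up placeholder, and this is my speculation about how an integration might start, not anything the Open Science Framework actually supports.

```python
import urllib.request
import xml.etree.ElementTree as ET

OPML_URL = "http://example.com/myNotes.opml"  # hypothetical published outline

with urllib.request.urlopen(OPML_URL) as response:
    tree = ET.parse(response)

def walk(node, depth=0):
    """Print the outline with indentation, one line per node."""
    print("  " * depth + node.get("text", ""))
    for child in node.findall("outline"):
        walk(child, depth + 1)

for top_level in tree.find("body").findall("outline"):
    walk(top_level)
```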

So that's one fuzzy idea. I don't know the folks at the COS well (I do have a few connections, though), so some more conversation there would be required. But since two of COS's six implementation objectives involve strengthening infrastructure, I bet they'd be into it. The end goal would be to use Fargo to strengthen the internet for expression and innovation through open science -- something that'd be good for all of us.

02/27/14; 02:45:46 PM

A super neat Psychological Science paper we're reading for lab meeting next week. It makes a pretty basic claim, but one that's helpful to think about. Here it is: The likelihood we'll need a particular memory is a function of the frequency with which we've needed (or have experienced) the memory in the past. For example, I see my officemate Pooja pretty regularly, and interacting with her makes me think of her name. The chance that I'm going to need to remember her name over the next few days, then, is probably pretty high.

This has implications for forgetting. Specifically, given all that prior exposure, it'll take me a long time to forget Pooja's name (in fact, that'll probably never happen, although I suppose there could be a slowdown in the rate at which I access her name when I'm 50 or something). On the other hand, the name of someone I've met just once or twice -- say, a sixth-year PhD student when I entered graduate school -- was rarely used and likely quickly forgotten.

The overall idea here is that our memory system is adaptive (and almost Bayesian in nature): Prior experience is predictive of future demands. So when Becky says, "Andy, you never remember when I'm going out with my friends," the scientific response is, "There's very little cost associated with forgetting that information, so I don't remember it." But see, then I get in trouble, which increases the cost of forgetting. As a result, I'm a bit better at remembering when she's going out with friends.
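To make the "almost Bayesian" idea concrete, here's a toy illustration -- my own sketch, not the paper's actual model. Suppose each past use of a memory contributes to the current odds of needing it, with the contribution decaying as a power function of how long ago that use was. Then frequent and recent use both push the estimate up.

```python
def need_odds(days_since_each_use, decay=0.5):
    """Toy estimate of the odds a memory is needed right now.

    Each prior use contributes t ** -decay, where t is the number of
    days since that use, so frequent and recent use both raise the
    estimate. (Illustrative only; not the paper's model.)
    """
    return sum(t ** -decay for t in days_since_each_use)

# Pooja's name: used nearly every day for the past two weeks.
print(need_odds(range(1, 15)))   # high -- very unlikely to be forgotten

# A name used twice, roughly a year ago.
print(need_odds([300, 400]))     # low -- cheap to forget
```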

Bottom line is -- a neat paper.

02/27/14; 12:14:15 PM

For fun this morning I opened a "pop-up" coffee shop for an hour. I'd say it was a qualified success -- I left one AeroPress part at home, so the offerings were more limited than I would have liked. Thanks to everyone who came, and see you back next month!

Here's a photo of the menu:

02/27/14; 11:03:18 AM

I am trying to fit some forgetting curves to my cognitive data and could use a little bit of help. The current extent of my knowledge is clicking the "trendline" button for a particular data series and going from there. What do I do, though, if I want the average forgetting function across a number of different data series? How does one arrive at an average function?
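Here's where I am so far, as a sketch in Python with scipy -- the delays and data points below are made up. There seem to be two candidate approaches: fit a curve (say, a power function) to each series and average the fitted parameters, or fit one curve to the averaged data. I gather the two don't have to agree, since an average of power functions isn't itself guaranteed to be a power function, which is part of why I'm asking.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_forgetting(t, a, b):
    """Power-law forgetting curve: performance = a * t^(-b)."""
    return a * t ** -b

delays = np.array([1.0, 2.0, 7.0, 14.0, 28.0])  # retention intervals, in days

# Each row is one data series (numbers are hypothetical).
series = np.array([
    [0.90, 0.80, 0.62, 0.55, 0.47],
    [0.85, 0.74, 0.60, 0.50, 0.44],
    [0.92, 0.83, 0.66, 0.58, 0.49],
])

# Approach 1: fit each series separately, then average the parameters.
fits = [curve_fit(power_forgetting, delays, y, p0=(1.0, 0.3))[0] for y in series]
print("mean (a, b) across series:", np.mean(fits, axis=0))

# Approach 2: fit a single curve to the series means.
params, _ = curve_fit(power_forgetting, delays, series.mean(axis=0), p0=(1.0, 0.3))
print("(a, b) fit to averaged data:", params)
```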

If you have any thoughts, feel free to leave them in the comments!

02/25/14; 11:10:37 AM

Meghan, Allison, and I have collected data from 68 subjects for Project 15 E2. This is great because it means we have more than enough for the final set of 64 subjects, the number we decided on before the experiment began.

Looking at the data, we have a few issues. Subjects 1, 9, and 14 are missing data due to internet connectivity issues during the pilot phase of the study, so we'll remove them from the dataset. Next, two subject 58s appear in the data for some reason. We'll only use the data from the first one. (On second glance, 58's data just made it into the data sheet twice, so I deleted the duplicate entries from the file.)

This leaves us with a fresh dataset of 64. These data can be found in the "Working Data" tab.
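For the record, these cleanup steps are simple enough to script. Here's a sketch in pandas -- the file and column names, including the trial column used to spot the duplicate rows, are hypothetical.

```python
import pandas as pd

df = pd.read_csv("project15_e2_raw.csv")  # hypothetical raw data file

# Drop subjects 1, 9, and 14 (incomplete data from connectivity issues).
df = df[~df["subject"].isin([1, 9, 14])]

# Subject 58's data was entered twice; keep one copy of each trial.
df = df.drop_duplicates(subset=["subject", "trial"], keep="first")

assert df["subject"].nunique() == 64  # the planned final sample
df.to_csv("project15_e2_working.csv", index=False)
```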

Once we have the working data, we can begin calculating the basic statistics. The first mission is to get the overall "old" rates for the three different item types: targets, related lures, and unrelated lures.

The hit rate for targets (M = .77, SD = .15) appears greater than the false alarm rate for related lures (M = .34, SD = .17), which in turn appears greater than the false alarm rate for unrelated lures (M = .14, SD = .13).
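In code, the "old" rates fall out of a single groupby. Here's a sketch continuing the hypothetical working file above, with an item_type column taking the values target, related_lure, and unrelated_lure and a response column taking "old" or "new."

```python
import pandas as pd

df = pd.read_csv("project15_e2_working.csv")
df["said_old"] = (df["response"] == "old").astype(int)

# "Old" rate per subject per item type, then M and SD across subjects.
rates = df.groupby(["subject", "item_type"])["said_old"].mean().unstack()
print(rates.agg(["mean", "std"]))
```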

The next thing to do is to get average confidence ratings for "old" responses for the three different item types.

Average confidence for "old" responses to targets (M = 80, SD = 11) appeared greater than for "old" responses to related lures (M = 54, SD = 14), which in turn was greater than for "old" responses to unrelated lures (M = 47, SD = 20). Interestingly, people seem to be a bit less confident responding to lures than in my previous research (i.e., DeSoto & Roediger, 2014).
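Same idea in code for confidence, except the averages condition on the subject having said "old" -- again a sketch against the hypothetical working file.

```python
import pandas as pd

df = pd.read_csv("project15_e2_working.csv")

# Mean confidence for "old" responses only, per subject per item type.
old_only = df[df["response"] == "old"]
conf = old_only.groupby(["subject", "item_type"])["confidence"].mean().unstack()
print(conf.agg(["mean", "std"]))
```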

And here's the table for the remember/know/guess data:

02/20/14; 02:20:08 PM

Freshly published online this week is a little how-to I wrote a while back about collecting confidence ratings (or similar ratings) in your cognitive psychology experiment. You can get a copy of it by clicking here (academic use only, of course). It's published in SAGE Research Methods Cases (thanks to Yana Weinstein for cluing me in to the opportunity). I don't know whether they'll make a print copy at some point.

It's written for beginning students who don't know very much about designing or programming experiments, so if you're reading this, you're probably more advanced than the target audience. If that's the case, just save it and send it along to your students.

The published version is missing its Figure 1 -- I'm trying to get it fixed as we speak -- but in the meantime, you can get it here:

Comments or questions? Please leave them here. I have enabled comments on these different pages just for this reason.

02/19/14; 03:36:05 PM

I spent this morning working on what's hopefully a final set of revisions to a chapter we're writing for the volume that grew out of the festschrift for Larry Jacoby held at Washington University this past summer. The title of the chapter is "Understanding the relation between confidence and accuracy in reports from memory," and it will be appearing in Remembering: Attributions, processes, and control in human memory.

It's a nice complement to the chapter we wrote in 2012 for Memory and law. If the relationship between confidence and accuracy in memory reports is something of interest to you, make sure to read the 2012 chapter and this next one when it comes out. I'll be sure to talk about it when it does.

Thanks for taking a look!

02/19/14; 03:25:29 PM

I'm working on my poster for the graduate student symposium, which is coming up next Saturday (the 22nd). It's on the president's project. I'm using my default poster template but am having trouble deciding which figures should go into the poster. I handed off copies to a few lab members to get their feedback, and I'm excited to see what they think.

02/13/14; 03:09:51 PM

We've been running a lot of experimental subjects in the Memory Lab this week, and when doing so it can be hard to focus on lengthy projects. In the meantime, I've been having some fun getting a little personal data-tracking program going.

Right now I am recording and publishing one variable: nightly sleep, in hours. If you go to my personal analytics page, you'll see the figure. I'm not quite sure it's working properly yet. What happens is that every morning my Jawbone UP24 records the time I wake up. Through IFTTT, a new row is added to one of my Google Spreadsheets. The spreadsheet updates the figure, and the figure displays on the site.

I had some trouble getting it set up yesterday because I was trying to use the =NOW() function, which is volatile (meaning it's recalculated every time any change is made to the spreadsheet, so the logged wake-up time wouldn't stay fixed). Instead I had to use some other spreadsheet trickery to get things to work. I am reasonably certain it should update properly when I wake up tomorrow morning.

I'm looking for other interesting or meaningful things to track and display. I have some other neat ideas -- Tweets by day were up here briefly yesterday before I took them down (quality control issues) -- but if you have any interesting ideas you should help me test the comment system here and leave a comment.

02/13/14; 12:46:20 PM
