headshot of George Dinwiddie with books he's written

iDIA Computing Newsletter

September 2022

Unconscious Bias and Serendipity

I've been a conference reviewer many times, and am the Program Chair of AgileDC. I've always tried to be a conscientious reviewer, both helping the submitter make the best proposal possible and helping the conference offer the best program possible. I'm aware, though, that both of these goals imply some bias: both depend on my point of view.

From my point of view, the proposed talk should give the attendees useful information. What information is useful? That's also from my point of view. Sometimes a session proposal advocates for behavior that I think is a bad idea. I have some strong beliefs, and if the proposed topic goes against those, I'm likely to recommend against the session. That's part of offering the best program possible.

From my point of view, the abstract should be clear, easy to understand, and should let the intended attendees understand what value they will receive in order to entice them to the session. That's part of making the best proposal possible. There's a middle ground between telling the punchline of the talk and being so coy that people aren't sure if it will be valuable.

I prefer to read the content of conference submissions first, without looking to see who submitted it. This is part of my strategy for trying to minimize my biases. I don't want to reject a good talk because I'm unhappy with the person submitting it for some other reason.

Some years back, I read a talk proposal that sounded completely flat to me. I was ready to vote "Reject" when I looked back to see who submitted it. I was surprised to see that it was a woman I knew and respected very much. I re-read the proposal, this time hearing it in her voice. This time I understood the value in it. I also realized the reason (or part of the reason) that I had been inclined to pass on it after the first reading. The benefits seemed understated and tentative. Much of the abstract was in passive voice rather than active. As it was targeted for managers and leaders, this seemed at odds with the expected decisiveness of the audience.

In the same realization, I noticed that this was bound up in gender expectations I had absorbed from being raised in a gender-biased society. Many suggest that blind review of submissions (that is, with the submitter unknown to the reviewer) can prevent bias, but this experience told me clearly that it cannot. In this case, blinding would have prevented me from noticing and correcting for my bias. I suggested that she edit her abstract to offer bolder benefits, and I happily voted "Accept" on the submission.

The mere fact that a reviewer is judging one submission as better than another is dependent on the biases of that reviewer.

Kent Beck has said

Write to the program committee. Never forget that before you can write to the vast, eager, and appreciative OOPSLA audience you must first get past the program committee. Before I begin I fix in my mind a picture of a harried PC member, desk piled with papers. Mine comes to the top. I have maybe thirty seconds to grab their interest.

Remember that the program committee is made up of experts in the field. Even if your topic is of broad interest to beginners, there must still be some spark in it to keep an expert reading to the end. If your topic is highly technical, it may not be in an area that they are familiar with, so it must readably present the novel aspects of the work.

The flip side of this is that a well-written submission does not necessarily indicate that the talk, itself, will be helpful to the audience. Giving a good talk and writing a good conference proposal are two different things.

Ferric C. Fang and Arturo Casadevall describe a similar gulf between a well-written grant proposal and significant scientific outcomes. "What is the desired product of scientific research? This question does not have a simple answer, but one measurable outcome is the generation of primary research publications, which are in turn cited by other publications."

The problem is that the peer review process commonly used is unable to predict this success. "The very structure of the NIH peer review system may encourage conformity and discourage innovation of the type that could lead to scientific revolutions." That process can, on the other hand, be demonstrably shown to have significant bias in the selection. "Sources of potential bias in peer review include cronyism and preference or disfavor for particular research areas, institutions, individual scientists, gender, or professional status."

The proposed solution is to hold a lottery, with some caveats. While it's impossible for peer review to predict what research will result in novel and important discoveries, experts are "generally able to weed out proposals that are simply infeasible, are badly conceived, or fail to sufficiently advance science." Performing this review first, and then holding a lottery on the remaining proposals relieves the reviewers from having to make impossible predictions of the relative merits of good proposals, and removes many sources of bias.

Conference program selection is subject to all the same sources of bias as grant reviews. For that reason, the AgileDC conference is doing something a little different this year. Instead of trying to order all the submissions by the review committee's consensus of perceived value, we're injecting a little randomness into the process. A fair lottery, by definition, cannot favor anyone; if the selection favors someone, it isn't a fair lottery.

In short, our process was as follows:

  1. Review submissions to make sure they passed a relatively low bar of appropriateness for the conference. Almost all submissions did.
  2. Hold a lottery to select the speakers. To avoid giving an advantage to those who submitted multiple proposals, we selected on speakers rather than on proposals.
  3. For each selected speaker, the committee looked at the proposals they had submitted and selected the consensus favorite.
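The selection process above can be sketched in a few lines of code. This is only an illustration of the approach, not AgileDC's actual tooling; the speaker names, data layout, and the choice of "first proposal" as a stand-in for the committee's consensus favorite are all hypothetical.

```python
import random

# Hypothetical submissions: each proposal records its speaker and title.
# Step 1 (screening for basic appropriateness) is assumed already done.
proposals = [
    {"speaker": "Alice", "title": "Refactoring Legacy Code"},
    {"speaker": "Alice", "title": "Test-Driven Development"},
    {"speaker": "Bob",   "title": "Agile Metrics"},
    {"speaker": "Carol", "title": "Facilitating Retrospectives"},
]

def select_program(proposals, num_slots, rng=random):
    # Step 2: draw speakers, not proposals, so submitting many
    # proposals confers no advantage in the lottery.
    by_speaker = {}
    for p in proposals:
        by_speaker.setdefault(p["speaker"], []).append(p)
    speakers = sorted(by_speaker)  # stable order before the random draw
    winners = rng.sample(speakers, min(num_slots, len(speakers)))
    # Step 3: for each winning speaker, the committee picks its
    # consensus favorite; here we stand in with the first proposal.
    return {s: by_speaker[s][0] for s in winners}

program = select_program(proposals, num_slots=2)
```

Grouping by speaker before sampling is the key design choice: it makes the lottery odds per person, not per submission.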

It's too soon to tell what effect this will have on the quality of the program, if we'll ever be able to ascertain that. It certainly has reduced the work of making the selection. While some of my favorite submissions did not win the lottery, that's always the case. And lobbying for the submissions that seem most attractive to me is a clear example of bias.

For more background on using a lottery to reduce bias, I refer you to Adam Cronkright, co-founder of Democracy in Practice and their experiments with selecting student leadership by lottery. These ideas go way back, though, at least to political reforms instituted by Cleisthenes in Athens, Greece between 508 and 507 B.C. to curtail the power of the aristocracy.

/signed/ George

P.S. What techniques do you use to counter your own biases and prejudices? This is a topic that interests me broadly, not just in conference talk selection. If you want to talk, schedule a Zoom call and let's explore the topic.

Schedule Zoom Call

Or simply reply to this email, or send an email to newsletter@idiacomputing.com to continue the conversation. There's a person, not a bot, on this end. I'd really love to hear from you.