Investigating the Bad Palates Argument | A Graphical Look At xBmt Performance

The rate of statistically significant exBEERiment findings currently hovers around 35%, a respectable number in my opinion, though far lower than many would expect given the presumed importance of the variables investigated. While it’s only natural to come up with excuses as to how this could be, none have been more popular than what I’ve come to call… The Shitty Palates Argument. Greg and I sought the assistance of bloggers Scott Janish and Justin Angevaare to help us compare triangle test performance rates based on level of experience. Are BJCP judges actually better at distinguishing differences than general beer drinkers? Read on to find out!
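
For anyone unfamiliar with how a triangle test result ends up counted as significant or not, here’s a minimal sketch of the underlying math, assuming a one-tailed binomial test against the 1/3 guessing rate with an alpha of 0.05; the taster counts in the example are made up for illustration, not actual xBmt figures:

```python
from math import comb

def triangle_test_p_value(correct: int, tasters: int, chance: float = 1 / 3) -> float:
    """One-tailed p-value: probability of at least `correct` right answers out of
    `tasters` attempts if everyone were purely guessing (chance = 1/3 in a triangle test)."""
    return sum(
        comb(tasters, k) * chance ** k * (1 - chance) ** (tasters - k)
        for k in range(correct, tasters + 1)
    )

# Illustrative numbers only, not actual xBmt data: 20 tasters, 11 correct picks.
p = triangle_test_p_value(11, 20)
print(f"p = {p:.3f} -> {'significant' if p < 0.05 else 'not significant'} at alpha = 0.05")
```

In other words, a panel only has to beat blind guessing by a wide enough margin for its size, which is part of why so many xBmts land on the non-significant side.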

Side bar: I love that you got Janish in on this.

Interesting data point that wasn’t touched on in the article: the % correct responses for “provisional” judges was a full 20% lower than everyone else. Any thoughts on why that is? Lower sample size or something more intriguing?

Scott is a badass!

I noticed that odd stat as well and honestly have no clue why it occurred. What I find most curious is that what you pointed out is only true when looking at significant xBmts alone, as provisional judges performed about the same as recognized and higher ranked judges on all xBmts combined. Anomaly perhaps?

If you assume that most or all BJCP judges in training are craft beer enthusiasts and homebrewers, the reason they suddenly fall well short of their otherwise comparable contemporaries could be explained by how people learn a new skill. Frequently, while assimilating new knowledge, people drop dramatically in ability as they struggle to fit the new information into their existing framework.

I have taught a certain set of psychomotor skills to professionals in my day job. At the beginning of the session, if you have them perform an open skills test using their existing skill level, they do fairly well. Then you teach them several new skills for a few hours. Then you go back to the original test, having them try to use the new skills. They almost always perform worse than they did before the learning. That’s why it’s a futile plan to teach too much too soon when it comes to psychomotor skills. Analytically tasting beer is a psychomotor skill. It’s hard for new judges to get out of their heads. It makes perfect sense to me that a craft beer enthusiast would do fairly well, an experienced homebrewer slightly better, an experienced judge slightly better yet, and the judge in training worst of all. Give them time and they will be at the top.

I agree with Jim, and also have seen that sort of thing happen at work. They’re more likely to overthink the issue, and subsequently doubt their initial response a bit more.

This data just seems reasonable to me. It makes sense that on average, BJCP judges have palates that are, well, average.

I’d hope this would encourage folks with the time to study to become BJCP judges, especially if they’ve ever felt that their palates aren’t good enough.

Something to note-- many of the provisional judges from a year ago are now recognized or certified judges today, and the provisional judges today will likely be recognized or certified a year from now. I’m not convinced their performance was hindered by overthinking at all, but rather by the fact that any differences between the samples simply weren’t noticeable. I’m biased, of course, as I taste the large majority of the xBmt beers and take the triangle test myself every time. It’s not as easy as people think… it’s not as easy as I thought it would be.

This is very interesting.

Nice work on the xBmt – it needed to be done.  Pretty clear results.

All in all, seems pretty reasonable. Thanks, Marshall!

Thumbs up.

Is there any evidence in the data that specific individuals consistently do better than average (supertasters)? If so, shortlisting them might make taste tests more sensitive (though possibly to things that don’t matter…)

We can look at specific individuals this time around, and I have done so in the past, about five months ago I believe, and there was no pattern whatsoever-- even higher ranked judges were wrong about 50% of the time.

I did two tastings at NHC. Got 50% wrong.  :wink:

I prefer to look at it as you getting half right! Haha.

Were either of those significant? I think I remember no, which means I’m some measure better than average!

Not even close to significant.

Well, I got 'em both right and I couldn’t smell anything.  Does that make me insignificant?

This doesn’t surprise me, either. We all perceive things in different ways, and at different thresholds.

Of course not! Just statistically lucky  :-\

In the BJCP, like anything else in life, status is a combination of skill and ambition.  We all know that there are bad judges out there.  Have you thought about going back and  collecting individual statistics, a “batting average” if you will, for all of your participants across every experiment?  You may find that some people are better tasters than others, even if it doesn’t correlate to BJCP membership.
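
If someone did want to compute that kind of batting average, a minimal sketch might look like the following; the participant names, outcomes, and the 1/3 chance rate for a triangle test are illustrative assumptions, not data from the actual xBmts:

```python
from math import comb

# Hypothetical per-participant records: True = correctly identified the odd beer.
# Names and outcomes are made up for illustration, not real xBmt data.
results = {
    "taster_A": [True, False, True, True, False, True],
    "taster_B": [False, False, True, False, False, False],
    "taster_C": [True, True, False, True, True, True],
}

def p_at_least(correct: int, attempts: int, chance: float = 1 / 3) -> float:
    """Probability of at least `correct` hits in `attempts` triangle tests by pure guessing."""
    return sum(
        comb(attempts, k) * chance ** k * (1 - chance) ** (attempts - k)
        for k in range(correct, attempts + 1)
    )

for name, outcomes in results.items():
    hits, n = sum(outcomes), len(outcomes)
    print(f"{name}: {hits}/{n} correct (batting average {hits / n:.2f}), "
          f"p vs. chance = {p_at_least(hits, n):.3f}")
```

With only a handful of tastings per person, though, the per-taster p-values would be badly underpowered, so it would take a lot of xBmts before anyone could fairly be labeled a supertaster.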

Interesting. I wouldn’t expect judges to do any better than average, but I thought you might by luck have netted a supertaster or two among your volunteers or just a few people with better than average taste/smell sensitivity. Doesn’t sound like it though. Supertasters tend to be Asian women - not so common in craft beer circles.

Add to that train of thought - everyone has a bad day. Sometimes my allergies act up. And so on.

Then there are people that might be flavor blind to what the difference is.