I am quite certain that my training with a BJCP class improved my ability to detect certain aspects of beer that had previously flown under my radar, but I am also cognizant that my palate is steadily losing sensitivity, so I really enjoy being paired with a younger palate to see what they get out of a particular beer.
As to statistical analysis of something as subjective as beer evaluation, I lean towards the comment Mark Twain popularized: “lies, damned lies, and statistics.” Yet I really enjoy your xBmts, so please continue…we may yet crack the “Beer Genome”.
What’s interesting to me is that 3 of my most reliable participants, guys who have each missed maybe only 3-4 xBmts, are Certified judges and, in my opinion, great beer evaluators… 2 were a hair under 50% accurate and 1 was a hair over, last I checked. My Coors Light drinking neighbor, on the other hand, a dude who vehemently hates IPA because it tastes “soapy,” was closer to 60% accurate, and he has no clue what he’s looking for.
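For perspective, chance on a triangle test is 1 in 3, not 1 in 2, so it takes a fair number of attempts before either record clearly beats guessing. Here’s a rough sketch with made-up counts (assuming Python with scipy available):

```python
# Hypothetical records: is either taster clearly better than guessing?
# On a triangle test, random guessing picks the odd beer 1/3 of the time.
from scipy.stats import binomtest

records = [
    ("Certified judge", 10, 20),       # roughly 50% accurate
    ("Coors Light neighbor", 12, 20),  # closer to 60% accurate
]

for name, correct, attempts in records:
    # One-sided binomial test against the 1/3 chance rate
    result = binomtest(correct, attempts, p=1/3, alternative="greater")
    print(f"{name}: {correct}/{attempts} correct, p = {result.pvalue:.3f}")
```

On samples that small, the gap between 50% and 60% is well within noise, which is part of why I hesitate to call any one participant good or bad at these.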
Even if a person can’t tell WHAT the difference is, we’re interested simply in their ability to detect a difference.
I don’t disagree, I just find so many things interesting!
Yep, I’m discovering that as I age my sense of smell, and therefore taste, comes and goes. Couple that with a reduced tolerance of alcohol and you get the reasons I seldom judge any more.
It just occurred to me (I am a bit slow) that while the data analysis is interesting, it is drawn from tests that are NOT testing the test taker, but rather testing for a specific difference in beer. I may misunderstand how triangles really work, but the focus is on the beer, not the taster. We might be able to correlate tester confidence with their answers, but not measure their specific skills by reverse engineering this data.
In other words, when people say “I am good at taking triangle tests,” that is a meaningless statement. Specifically, an individual being right or wrong carries no weight on its own in the analysis, and a triangle test levels the playing field between unskilled and skilled tasters when applied in this manner. It is a form of implied confirmation bias to assume there SHOULD be a difference between the beers when, statistically, there may NOT be one that tasting skill alone can reveal.
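To put a number on that: the analysis only asks whether the panel as a whole beats the 1 in 3 chance rate, so any individual answer, right or wrong, is just one coin flip in the pile. Here’s a quick sketch of the arithmetic (panel sizes are just examples, and it assumes Python with scipy):

```python
# How many correct picks does a panel need before the result counts as
# significant at p < 0.05? Under pure guessing, every taster has a 1/3
# chance of fingering the odd beer, regardless of skill.
from scipy.stats import binom

def correct_needed(n, alpha=0.05, p_chance=1/3):
    """Smallest k such that P(X >= k) < alpha under pure guessing."""
    for k in range(n + 1):
        if binom.sf(k - 1, n, p_chance) < alpha:  # sf(k-1) = P(X >= k)
            return k

for n in (10, 20, 30):
    print(f"panel of {n}: {correct_needed(n)} correct picks needed")
```

Nothing in there knows or cares whether any particular taster is skilled; skill only matters if it shifts the whole panel’s count.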
I feel like we always get caught up in our hypothesis that we should taste the difference, and that somehow our skills are lesser if we do not.
Then again, I might just suck at triangle taste testing.
Good point, Matt. In taste analysis of beer, lack of confidence probably means second-guessing, something people who are in the process of learning new skills tend to do. In other words, I still believe the stat showing judges in training are the least accurate in blind triangle testing makes perfect sense.
Yes, it’s likely there really is no perceptible difference caused by a lot of the variables tested. However, a second possibility is that there is a difference but it’s under most people’s taste threshold (and therefore doesn’t matter). A third possibility is that something about triangle tests (palate fatigue / sipping small amounts / mixing successive drinks in the mouth and nasal cavity) makes triangle tasters less sensitive than regular beer drinkers.
I lean towards option 1, if only because I enjoy the fact it undermines 95% of received wisdom about how to brew.
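That said, option 2 is easy to put numbers on. Under the standard discriminator model for triangle tests (Abbott’s formula), if only a fraction of tasters can truly tell the beers apart, a typical panel will miss the difference more often than not. A rough sketch, with hypothetical panel sizes (again assuming scipy):

```python
# Power of a triangle test when a real but weak difference exists.
# Abbott's formula: if a fraction p_d of tasters are true discriminators,
# the expected proportion correct is p_d + (1 - p_d) / 3, since the
# non-discriminators still guess right a third of the time.
from scipy.stats import binom

def power(n, p_d, alpha=0.05):
    # Significance threshold under pure guessing (p = 1/3)
    k = next(k for k in range(n + 1) if binom.sf(k - 1, n, 1/3) < alpha)
    p_correct = p_d + (1 - p_d) / 3
    return binom.sf(k - 1, n, p_correct)  # chance the panel hits threshold

for p_d in (0.10, 0.25, 0.50):
    print(f"20 tasters, {p_d:.0%} true discriminators: power = {power(20, p_d):.2f}")
```

With 20 tasters, even when a quarter of the panel genuinely perceives the difference, the test comes up significant less than half the time, so “no significant difference” is a long way from “no difference.”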
I know some folks who are huge into Budweiser, as in back in the day they’d travel to the different breweries and compare the product. I wonder how sensitive their palates are, given they were able to distinguish minute differences in a pretty flavorless beer.
Since this is related to the subject at hand, I’m carrying on here.
I do find that untrained palates are often less likely to discern differences in beer flavor and character. When tasting panels are convened in commercial settings, the tasters are often trained and graded for sensitivity in a number of sensory areas. I’m concerned that using ‘regular’ untrained tasters, who have no guidance as to what differences they should be looking for and no training to recognize them, skews the results toward “can’t tell a difference.” It’s not until you have a ‘clubbed over the head’ difference between beers that a viable result can be noted. I feel that’s not good for science and not good for brewing improvement.
While I applaud the explorations that Brulosophy conducts, the results point out the mediocrity of an untrained palate that has no idea what it might need to note as a difference. Since most of these tests compare nuanced differences, it is probably also appropriate to include more focused assessments and comparisons using trained palates to help discern whether there are differences. I like that the authors of these various exBeeriments often try to explore differences in their beers with full knowledge of their brewing differences, but I’d like to see more trained palates included in that assessment. Triangle testing does help reduce randomness in the assessments, but I would like to know the tasters have had an opportunity to focus on what the potential difference or flaw is and whether it’s really perceptible.
Since these beers are often decent, similar beers, I’m not surprised that the tasters can’t perceive a difference between them. But I don’t want to automatically apply a finding of ‘makes no statistical difference’ to an experimental trial based on that measurement alone. Remember, the majority of beer drinkers think that Budmilloors is great beer.