We want that evidence to be empirical! So we form protocols and controls, and we experiment to produce empirical evidence. Or we have others do it, and we learn from their work because we trust that they did it well, and we can check that trust to our satisfaction against their published literature, our peer-review processes, the credibility of the journals that accept their work, and the others who replicate their results.
Sometimes the empiricality of evidence gets questioned, by ourselves or by others.
“The researchers could have done this extra thing,” or “They should have controlled for this variable of interest to me.”
The criticism centres on a lack of specificity in the evidence manufacturing process. Fair enough! Shouldn’t we be completely specific when manufacturing new evidence?
Specificity, after all, is the very basis of empiricality.
But the fact is (or is it a fact? It's observable and testable, but is it pure objective truth?) that no matter how exhaustively specific you are, you can always be more specific.
Test my claim if you like: Find or write any definition that you feel is an example of ultimate specificity, show it to me, and then concede the point when you see me make it more specific.
Even when you produce a definition that is as specific as you know how to be, that doesn’t mean it is the most specific it’s possible to be. And even if you’ve produced a definition more specific than any other definition ever to exist, that still doesn’t mean it’s the most specific possible – it’s merely the most specific definition that has ever existed so far.
Yet you have to use something for your evidence production. You can’t plumb the infinity of specificity for the rest of time to reach a theoretical endpoint before you begin your process of producing evidence. You simply reach a point of specificity that is adequate for you (or your stakeholders) and you move the work ahead, produce the evidence, and share the knowledge.
That’s an arbitrary point to reach. And it's determined purely by how satisfied we feel about the level of specificity we decide to use.
Because we can never achieve ultimate specificity, evidence can never be truly, objectively empirical. It can only ever be empirical enough for an individual to accept and choose to consume.
The point is there's no bedrock of specificity; and therefore there is no ultimate form of empiricality.
Evidence is not either “empirical” or “non-empirical”. It’s only ever “less empirical” or “more empirical” in relation to other evidence.
All the evidence we produce and consume, and base our opinions, beliefs, and convictions on, we accept by drawing our own arbitrary line in the infinite shifting sands of empiricality.
We each make the choice to accept evidence and to form opinions, beliefs, and convictions based on our own completely arbitrary threshold of acceptable empiricality.
Sometimes we sneer at others for having a threshold lower than ours, polluting their beliefs with low-quality information; or we scoff at those with higher thresholds for how their closed-mindedness diminishes their experience of living.
That's embarrassing, because all our thresholds are equally arbitrary (and they change all the time, depending on how we feel and how badly we want to accept a piece of evidence).
Next time you learn something, take a look at how empirical your source was. If it was less empirical than other sources you learned from, ask yourself why.
What you learn about yourself in the process may not be empirical, but it might be useful to you.
And what better purpose could knowledge ever serve than that?
Thanks for learning!