Ensuring the integrity of the associations that support our conclusions.
In 2020, a number of 5G masts were set on fire in the UK because of a belief, which gained traction online, that linked 5G to the spread of COVID-19. Whoever committed this arson seems to have been motivated by connecting a series of “dots.” First (dot one), that 5G towers may weaken the immune system; second (dot two), that this made people more vulnerable to COVID; third (dot three), that 5G therefore, in a way, “caused” the pandemic.
It is easy to connect dots in ways that seem logical to us but do not withstand scrutiny. Sometimes these connections start with a fundamental misconception (such as that 5G weakens the immune system, which it does not); sometimes they start with correct, or at least plausible, information but go astray somewhere along the way. This was arguably the case with hydroxychloroquine during the COVID-19 pandemic. Early observational studies suggested there might be some benefit to its use, a suggestion that was amplified rapidly through media and politics. But large randomized controlled trials, such as the RECOVERY trial, found no benefit. Nevertheless, hydroxychloroquine continued to be touted for its supposed effectiveness, even as the data did not support the hype. The initial, plausible data point suggesting hydroxychloroquine might have some utility was quickly connected to other dots that had no bearing on its actual effectiveness, leading many to draw the wrong conclusions about the drug.
Closer to home, much critique has been leveled at social science in particular for perhaps connecting too many dots, for making causal inferences too glibly. Are we too quick to assert that a particular upstream neighborhood condition causes an individual health indicator? Are we connecting too many dots there? Is this, in part, why we have a replication crisis? We cannot avoid these questions if we are to shape a science worthy of the work we are trying to do, the world we are trying to create.