Non-centralised behaviours such as those that characterise swarm robotics systems are vulnerable to intentional disruption from internal or external adversarial sources. Threats to swarm robotics can be executed through manipulation of goals, behaviour, the environment, or communication. Experimental studies in this area remain sparse. We study an attack scenario in which an adversary actively modifies the data exchanged between authorised participants. We formulate a robust probabilistic adaptive defence mechanism that does not aim to identify malicious agents, but to provide the swarm with the means to minimise the consequences of the attack. The mechanism relies on dynamically modifying the probability with which agents change their current information in light of new contradictory or corroborating incoming data. We investigate several experimental conditions in simulation. The results show that the presence of adversaries in the swarm hinders convergence to the majority opinion under a baseline method, but that there are several conditions in which our adaptive defence mechanism is highly effective.
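To make the adaptive idea concrete, the sketch below gives one plausible reading of such a mechanism: each agent keeps a switch probability that is tightened by corroborating messages and relaxed by contradictory ones. Every name and constant here (Agent, p_switch, GAIN, LOSS, the initial value and bounds) is an illustrative assumption; the exact update rule is not specified in this abstract.

```python
import random

class Agent:
    """Toy agent with a binary opinion and an adaptive switch probability.

    Assumed multiplicative update: corroboration lowers the probability of
    abandoning the current opinion, contradiction raises it. The constants
    below are hypothetical, chosen only for illustration.
    """
    P_INIT, P_MIN, P_MAX = 0.5, 0.05, 0.95  # assumed initial value and bounds
    GAIN, LOSS = 0.9, 1.1                   # assumed multiplicative factors

    def __init__(self, opinion: int) -> None:
        self.opinion = opinion        # current information held by the agent
        self.p_switch = self.P_INIT   # probability of adopting contrary data

    def receive(self, incoming_opinion: int) -> None:
        if incoming_opinion == self.opinion:
            # Corroborating data: grow more confident, i.e. less willing
            # to switch away from the current opinion.
            self.p_switch = max(self.P_MIN, self.p_switch * self.GAIN)
        else:
            # Contradictory data: switch with the current probability,
            # then relax slightly so repeated genuine evidence can still win.
            if random.random() < self.p_switch:
                self.opinion = incoming_opinion
            self.p_switch = min(self.P_MAX, self.p_switch * self.LOSS)
```

Under this reading, an adversary injecting a minority opinion must overcome a switch probability that honest, corroborating neighbours have already driven down, which bounds the attack's influence without ever identifying which senders are malicious.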