The Beacon bot detection model has evolved over time to use a range of metrics to estimate the likelihood that a particular visit to a website is being carried out by a bot rather than a human. Current bots can be broadly classified into the following four types, each corresponding to a version of the Beacon bot detection model (currently on version 4):

  • Absent – reported, paid-for campaign clicks that are entirely missing. This type of bot presents the challenge that there is no data on which to base protection tagging, but it has more recently been mitigated within Beacon by the introduction of type 4 bot detection.
  • Partial – reported, paid-for campaign clicks that present only a click but no other visit data. A decision must be made as to whether the missing data reflects a deliberately aborted visit caused by slow website response times or other network issues (the “low-value clicks” case), or a malicious attempt to avoid detection and/or a lack of JavaScript capability. The assessment of how “botty” this type is can be strengthened by the presence of other type 3 and type 4 bot indicators.
  • Malformed – reported, paid-for campaign clicks that present superficially legitimate journey data, but which on algorithmic inspection lack key humanity indicators, such as mouse movement or screen-edge touch points on mobile devices.
  • Browser Invalid – the first Beacon type to evaluate all traffic landing on a website, regardless of source. It uses browser fingerprinting to determine whether a browser is what it claims to be. A failed check is not in itself an absolute guarantee of a bot, but it is a strong indicator, as bots generally operate programmatically through a headless browser while pretending to be something else. When combined with the other types of bot detection, this is extremely powerful and a game changer in the rapid detection of, and mitigation against, bots.
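The four types above amount to a triage of the signals available for a single visit: whether any visit data arrived, whether humanity indicators are present, and whether the browser fingerprint is consistent. A minimal sketch of that triage is below; all field and function names (`Visit`, `classify`, the boolean flags) are illustrative assumptions, not Beacon's actual data model, and type 1 "absent" bots cannot be classified here at all since, by definition, no visit record exists to inspect.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Visit:
    """Hypothetical per-visit record; field names are assumptions."""
    click_reported: bool              # an ad platform reported a paid-for click
    journey_data: bool                # on-site visit data arrived
    mouse_movement: bool = False      # humanity indicator (desktop)
    edge_touch_points: bool = False   # humanity indicator (mobile)
    fingerprint_consistent: bool = True  # browser is what it claims to be

def classify(visit: Visit) -> Optional[str]:
    """Map a visit's signals onto bot types 2-4 (simplified sketch).

    Type 1 ("absent") produces no visit record, so it can only be
    detected by reconciling billed clicks against records like these.
    """
    if visit.click_reported and not visit.journey_data:
        return "partial"            # type 2: click only, no other data
    if not visit.fingerprint_consistent:
        return "browser-invalid"    # type 4: fingerprint mismatch
    if visit.journey_data and not (visit.mouse_movement or visit.edge_touch_points):
        return "malformed"          # type 3: no humanity indicators
    return None                     # no bot signal on this visit
```

As the text notes, these signals reinforce one another: a "partial" visit that also fails the fingerprint check is a far stronger bot indicator than either signal alone, so a production system would combine them into a likelihood rather than a single label.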