We use risk factors to determine a user's level of suspicion. If a user has more than one risk factor associated with their behavior, we defer to their primary risk factor.
Why it's a risk
We didn't find user risk associated with the session.
The user’s IP address originates from a colocation data center. Data centers are shared computing systems that lease space to remote customers. We maintain a database of data center IP address ranges.
Data centers are favored by bot networks because they're secured facilities where human users aren't typically present. For example, a session coming from an Amazon Web Services data center address block is unlikely to be a valid human user.
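The database lookup described above amounts to testing an address against known CIDR ranges. A minimal sketch using Python's standard `ipaddress` module follows; the two address blocks are illustrative placeholders, not entries from an actual data center database.

```python
import ipaddress

# Illustrative address blocks only; a real database would hold many
# thousands of data center ranges (cloud providers, colocation hosts).
DATA_CENTER_RANGES = [
    ipaddress.ip_network("3.0.0.0/9"),
    ipaddress.ip_network("34.64.0.0/10"),
]

def is_data_center_ip(ip: str) -> bool:
    """Return True if the address falls inside a known data center range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DATA_CENTER_RANGES)
```

A production system would load the ranges from a maintained feed and use a prefix trie rather than a linear scan.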
A proxy server acts as an intermediary for a user’s internet traffic. Proxies are commonly used to anonymize a user or hide their identity; consumer proxy services are often marketed as virtual private networks (VPNs).
Because proxy servers conceal a user’s location or identity, they make it difficult to differentiate fraudsters from legitimate traffic. It's not common for consumers to use a proxy service to hide their identity.
The Onion Router (TOR) is a protocol developed to anonymize web traffic and protect a user’s identity. Legitimate users include whistleblowers, law enforcement, and users in censored countries.
Fraudsters can use TOR to conceal their location and usage information from anyone conducting network surveillance or traffic analysis. Most consumers do not try to hide their identity with TOR, making its use suspicious in ecommerce.
Traffic that originates from a university network carries a higher risk of fraud than other internet traffic.
Networks at universities have large user pools with constantly changing information, making them a preferred hiding place for fraudsters. Universities also historically have higher rates of transaction fraud.
The same IP address shows up on multiple sites that are owned by the same publisher, at a frequency beyond normal human activity.
Publishers that buy bot traffic will spread the traffic across an array of websites.
Detect non-human traffic by measuring the time between clicks on a website.
Bots are computer programs that perform repeatable actions; precise repetition of activity is an indication of non-human traffic. A human user shows variation in click timing and patterns.
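The timing heuristic above can be sketched by measuring how much the gaps between clicks vary. This is a simplified illustration, assuming timestamps in seconds; the cutoff value is illustrative, not production-tuned.

```python
from statistics import mean, stdev

def clicks_look_scripted(click_times: list[float]) -> bool:
    """Flag a session when the gaps between clicks are nearly identical.
    click_times: click timestamps in seconds, in order.
    The 0.05 coefficient-of-variation cutoff is an assumption."""
    if len(click_times) < 3:
        return False  # too few clicks to judge
    gaps = [b - a for a, b in zip(click_times, click_times[1:])]
    # Humans vary; near-zero variation in gap length suggests a script.
    return stdev(gaps) / mean(gaps) < 0.05
```

Sophisticated bots add random jitter, so real detectors compare the full distribution of gaps against human baselines rather than a single variance cutoff.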
User ID Rotation
The same IP address is used for multiple sessions with varying user IDs.
Typically, the relationship between a user’s IP address and an ecommerce account is one to one. Occasionally a consumer will have multiple IPs; however, a significant quantity indicates fraud.
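The one-to-one heuristic reduces to a cardinality check: count distinct user IDs seen per IP address. A minimal sketch follows; the function name and threshold are illustrative assumptions.

```python
from collections import defaultdict

def flag_rotating_ips(sessions, max_user_ids=5):
    """sessions: iterable of (ip, user_id) pairs.
    Returns the IPs seen with more distinct user IDs than
    max_user_ids allows. The threshold is illustrative."""
    ids_by_ip = defaultdict(set)
    for ip, user_id in sessions:
        ids_by_ip[ip].add(user_id)
    return {ip for ip, ids in ids_by_ip.items() if len(ids) > max_user_ids}
```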
User Agent Rotation
The User Agent String for traffic comes from the same IP address and changes from session to session.
As new sessions start, bots will use rotating details (like browser type or operating system) to try to pass as legitimate human traffic. It’s not likely that a real user’s IP address will continually change user agent details. Real people tend to use the same devices regularly.
Note: This can happen when users are behind a large organization's proxy server. Therefore, it must be an abnormally high session count to qualify as suspicious.
Unusually large numbers of matching keyword referrals with shared identifiers, like IP address or destination.
High-volume traffic that clusters around the same keywords indicates click farms and bot traffic.
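Detecting this clustering amounts to counting how often the same keyword arrives with the same identifier. A minimal sketch, assuming (keyword, IP) pairs and an illustrative repetition threshold:

```python
from collections import Counter

def suspicious_keyword_clusters(referrals, threshold=100):
    """referrals: iterable of (keyword, ip) pairs.
    Flags combinations repeated far more often than normal human
    search behavior would produce. Threshold is illustrative."""
    counts = Counter(referrals)
    return {pair for pair, n in counts.items() if n >= threshold}
```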
We maintain a database of known bot traffic sources.
Known bot sources will probably continue to have bot traffic.
Spoofed User Agent
The User Agent String is identified as fraudulent. We can detect characteristics of a user and compare them to the browser's user agent string.
If the user agent string is fake, the traffic is probably driven by a fraudulent user. The technical lift and lack of benefit for a legitimate user make it unlikely that a real user will modify the user agent string.
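The comparison described above can be sketched as a consistency check between the claimed user agent string and independently observed characteristics. The field names in `observed` are hypothetical, and real detection uses many more signals (e.g., JavaScript-measured capabilities).

```python
def user_agent_looks_spoofed(claimed_ua: str, observed: dict) -> bool:
    """Cross-check the claimed user agent string against separately
    observed session characteristics (field names are hypothetical).
    A mismatch suggests the string was fabricated."""
    checks = [
        ("Windows" in claimed_ua) == (observed.get("platform") == "Windows"),
        ("Mobile" in claimed_ua) == observed.get("is_mobile", False),
    ]
    return not all(checks)
```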
Browsers report the viewability of a session or advertisement. Unviewable sessions are minimized, briefly visible, or originate from a headless browser.
Real traffic requires the ability to view a webpage to navigate the site, view ads, and make purchases. If a session isn't viewable, the traffic is probably from a bot.
To avoid detection, fake traffic rotates referrer cookies to appear human.
Typically, the relationship between a user and a cookie is one to one. We assign a high level of suspicion when the same IP address, or other identifier, is recorded with multiple cookies.
By measuring the number of clicks in a session, we can detect non-human traffic. A variance from normal human activity indicates that the user is a bot.
If a script is driving the session, the number of clicks can be extremely low (by avoiding clicks with code) or extremely high (if the bot navigates poorly). Either extreme can indicate suspicious activity.
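A minimal sketch of this both-extremes heuristic is a simple band check on the session's click count. The bounds here are illustrative assumptions; real thresholds would be tuned against observed human baselines.

```python
def click_count_suspicious(clicks: int, low: int = 2, high: int = 200) -> bool:
    """Flag sessions with implausibly few or implausibly many clicks.
    The low/high bounds are illustrative, not production values."""
    return clicks < low or clicks > high
```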
When a user starts a web session, details about how the traffic found the website are transmitted. Search engines like Google publish the structure of this data, making it difficult to fake searches accurately. This traffic can also be made up of referrals from an inactive search engine, like 7Search.com.
When the origins of a visitor's search details are wrong, the traffic can be identified as invalid.
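One simplified way to validate search details is to check that a claimed search-engine referrer actually carries the structure that engine publishes. The sketch below checks only one property (Google search referrers historically carried their query in the `q` parameter) and is an illustration, not the full published structure.

```python
from urllib.parse import urlparse, parse_qs

def referrer_looks_forged(referrer: str) -> bool:
    """Sanity-check a claimed search referrer. A google.com referrer
    with no query parameter where one is expected is a red flag.
    Simplified sketch; real validation covers many more fields."""
    parsed = urlparse(referrer)
    if "google." not in parsed.netloc:
        return False  # only checking Google-style referrers here
    return "q" not in parse_qs(parsed.query)
```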
We maintain a list of traffic referrers that are known sources of bad traffic, like bot farms and click farms.
Traffic from known bad reference sources is not considered valid.
Spam bots crawl websites to steal information, like email addresses or phone numbers, to add to spam lists.
Spam bots serve no purpose other than harvesting this information, so their traffic is never legitimate.