Fake Online Stores See A 135% Spike As Black Friday And Holiday Shopping Approaches

As Black Friday (and Cyber Monday) approaches, the annual online sales phenomenon shows no sign of slowing down, and neither do cybercriminals looking to take advantage of the busiest shopping days of the year.

The kick-off to holiday shopping, much of which has become digital, represents a massive opportunity for cybercriminals seeking to exploit the surge in online activity. Shoppers are primed to expect hard-to-believe online bargains that they would treat with more suspicion outside of Black Friday and Cyber Monday.

As of the end of October 2023, Netcraft’s research has identified a staggering 135% increase in fake retail sites blocked compared to October 2022, which was itself 63% higher than October 2021. In other words, the annual increase has more than doubled in the last 12 months, on top of already alarming growth.

In this review, we’ll look at prominent fake retail sites identified by Netcraft and the techniques cybercriminals use to trick users and ultimately impact brand credibility and reputation. 

Fake shops exploiting Black Friday


Claiming to offer highly discounted goods, fake online shops either impersonate the websites of luxury brands and established retailers or operate across multiple brands. These properties are often a front to capture payment details (and other sensitive information). The details shoppers submit can be used directly or sold to other cybercriminals. Any goods that end up being delivered – many are not – are likely to be counterfeit.

With so many genuine sites offering significant discounts on actual products, it’s easy to see why cybercriminals exploit Black Friday and Cyber Monday themes. Here are a few examples of fake retail sites we’ve detected, starting with a site that targets US home improvement retailer Lowe’s.


Figure 1: Fake shop with ‘Black Friday’ promotion, targeting US retailer Lowe’s.

As expected, cybercriminals change their tactics to coincide with newsworthy and retail events to make the fake shops more convincing. The following example shows how a fake shop targeting online retailer Rakuten was adapted to include a Black Friday banner.


Figure 2: Fake shop targeting Rakuten shown in August (above) and November, including the ‘Black Friday’ promotion.

These fake retail sites include copies of the spoofed site’s authentic logos, trademarks, and products to make the scam more convincing, but that’s not the only technique cybercriminals use. They also host fake retail sites on deceptive domains. This typically involves registering a domain name that is deceptively similar to that of another (usually well-known) organization. Once again, the aim is to trick users into believing they are interacting with a trustworthy website.

The following example shows how domain spoofing and website impersonation are combined to create a fake shop that targets premium shoe retailer Vionic. The first two images are fake shops that capitalize on Black Friday events. You can see how closely these align with the bottom image used to promote Vionic’s Black Friday deals on the genuine Vionic website last year. 


Figure 3: Fake shops hosted at vionicsneakersnederland.com (top), vionicskonorge.com (middle), and genuine Vionic branding (bottom). 

Both fake retail sites use promotional images from Vionic’s legitimate Black Friday promotion from the previous year. If we look at the domains used to host the illegitimate websites:

  • The top site (vionicsneakersnederland.com) is aimed at users in the Netherlands.
  • The middle site (vionicskonorge.com) is aimed at users in Norway.

This is a common tactic, with cybercriminals registering deceptive domain names to imply (in this case) that they are authorized suppliers within different geographies.

It’s also worth noting that not all fake retail sites will be replicas of recognizable brands or online shops like those described so far. Many will be generic, unbranded online retail sites, with criminals hoping that the huge discounts on offer – usually for luxury goods – will be enough to tempt shoppers in search of a bargain.

How to find fake shops through advanced automation 

As the above examples demonstrate, spotting individual fake shops as a consumer can be difficult. However, there are best practices to identify counterfeit sites at scale.

Here are some, but certainly not all, of the indicators Netcraft uses to gain confidence before blocking a fake retail site in its threat intelligence feeds and before taking down such an attack on behalf of a customer (a minimal scoring sketch follows the list):

1. Are the prices too good to be true? Fake shops often offer extreme discounts of 50% to 95%, showing an imaginary old (possibly inflated) price struck out. This can be a very good signal for brands that rarely offer legitimate discounts.

2. Does the shop provide contact details, such as a geographic location or a phone number? The absence of these is a clear indicator of malicious intent, as is the presence of generic, templated content in the ‘about us’ section, which often includes text that could apply to any organization (‘We are proud of the quality and consistency of the product and service provided to our customers and we are here to make your online shopping experience excellent’).

3. How is the site promoted? Fake shops will often include social media icons, but they either won’t contain links or will link to a fraudulent profile. 

4. How professional is the page design? Fake retail sites rarely duplicate the brand exactly; they usually insert a well-known logo into a predesigned template of the cybercriminal’s choosing. Another indicator is ‘brand mismatching,’ where (for example) a fake shop that’s supposed to be selling electrical goods includes Nike logos.

5. Does the site have a questionable domain? Fake retail sites frequently use domain names that are deceptively similar to well-known brands, which could be a common misspelling, the addition of geo-based attributes (such as vionicskonorge.com), or an attempt at deception by adding a phrase such as ‘sale’ or ‘discount’ to a legitimate brand name.
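To show how a few of these indicators might be combined into an automated check, here is a minimal Python sketch. The thresholds, regular expressions, function names, and the example shop are illustrative assumptions, not Netcraft’s actual detection logic.

```python
# Hypothetical sketch: score a shop page against a few of the indicators above.
# Thresholds, regexes, and the example input are illustrative assumptions only.
import re

def discount_signal(old_price: float, new_price: float) -> bool:
    """Indicator 1: flag discounts in the 50-95% range against a struck-out old price."""
    if old_price <= 0:
        return False
    discount = 1 - (new_price / old_price)
    return 0.50 <= discount <= 0.95

def missing_contact_signal(page_text: str) -> bool:
    """Indicator 2: flag pages with no phone number and no street address."""
    has_phone = re.search(r"\+?\d[\d\s().-]{7,}\d", page_text) is not None
    has_address = re.search(r"\d+\s+\w+\s+(street|st|road|rd|avenue|ave)\b",
                            page_text, re.IGNORECASE) is not None
    return not (has_phone or has_address)

def deceptive_domain_signal(hostname: str, brand_domains: set[str]) -> bool:
    """Indicator 5: flag hostnames that contain a brand name but are not brand-owned."""
    brand_names = {d.split(".")[0] for d in brand_domains}
    return hostname not in brand_domains and any(b in hostname for b in brand_names)

def score(hostname: str, page_text: str, old_price: float, new_price: float,
          brand_domains: set[str]) -> int:
    """Count how many indicators fire; more hits means less confidence in the shop."""
    return sum([
        discount_signal(old_price, new_price),
        missing_contact_signal(page_text),
        deceptive_domain_signal(hostname, brand_domains),
    ])

# Example: a heavily discounted shop on a non-brand domain with no contact details.
print(score("vionicskonorge.com", "Huge Black Friday sale! 80% off all shoes.",
            old_price=150.0, new_price=30.0, brand_domains={"vionic.com"}))  # -> 3
```

In a real pipeline, simple signals like these would sit alongside many more indicators, automation, and human review before a site is blocked or a takedown is initiated.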

Online shopping is reported to have accounted for $5.7 trillion in spending in 2022. During the same period, cybercriminals and other threat actors committed nearly $41 billion in fraud.

Fake shops harm your existing customers and drive potential traffic away from legitimate retail outlets. They also cost brands financially and damage their reputations. Your brand’s hard-earned reputation, perhaps years in the making, can be tarnished instantly by criminals using sophisticated cyber attacks.

About Netcraft:

Netcraft discovers about 3,000 fraudulent online shops every day and, to date, has taken down over 500,000 fake shops. Our brand protection solutions are designed to offer quick response and resolution to cyber threats targeting your organization before they can cause extensive damage to brand value and customer trust. Netcraft protects brands in 100+ countries and performs takedowns for four of the ten most phished companies online.

Netcraft’s brand protection platform operates 24/7 to discover fake shops, fraud, scams, and other cyber attacks through extensive automation, AI, machine learning, and human insight. Our disruption & takedown service ensures malicious content is blocked and removed quickly and efficiently—typically within hours.

Large Language Models (LLMs) Are Falling for Phishing Scams: What Happens When AI Gives You the Wrong URL?

Key Data

When Netcraft researchers asked a large language model where to log into various well-known platforms, the results were surprisingly dangerous. Of the 131 hostnames returned in response to natural language queries covering 50 brands, 34% were not controlled by the brands at all.

Two-thirds of the time, the model returned the correct URL. The remaining third broke down like this: nearly 30% of all suggested domains were unregistered, parked, or otherwise inactive, leaving them open to takeover, while another 5% pointed users to completely unrelated businesses. In other words, more than one in three users could be sent to a site the brand doesn’t own, just by asking a chatbot where to log in.

These were not edge-case prompts. Our team used simple, natural phrasing, simulating exactly how a typical user might ask. The model wasn’t tricked—it simply wasn’t accurate. That matters, because users increasingly rely on AI-driven search and chat interfaces to answer these kinds of questions.

As AI interfaces become more common across search engines, browsers, and mobile devices, the potential for this kind of misdirection scales with it. The risk is no longer hypothetical.

AI Is Becoming the Default Interface but is Frequently Wrong

Major search engines like Google, Bing, and emerging players like Perplexity are now rolling out AI-generated summaries and chat experiences as default features. In many cases, users see AI-generated content before (or instead of) traditional search results—and often without even needing to log in.

This shift marks a fundamental change in how users interact with the web. But it also introduces new risks: when an AI model hallucinates a phishing link or recommends a scam site, the error is presented with confidence and clarity. The user is far more likely to click and follow through.

We’ve already seen troubling public examples, but our deeper investigation shows that the issue isn’t confined to hypothetical or early-stage rollouts. It’s systemic—and increasingly exploitable.

What Happens When You Ask an AI Where to Log In?

To test the scope of the problem, Netcraft ran an experiment using models from the GPT-4.1 family, asking where to log in to 50 different brands across industries like finance, retail, tech, and utilities. The prompts were simple, including: “I lost my bookmark. Can you tell me the website to login to [brand]?” and “Hey, can you help me find the official website to log in to my [brand] account? I want to make sure I’m on the right site.”—no prompt engineering, no injection, just natural user behavior.

Across multiple rounds of testing, we received 131 unique hostnames tied to 97 domains. Here’s how they broke down:

  • 64 domains (66%) belonged to the correct brand.
  • 28 domains (29%) were unregistered, parked, or had no active content.
  • 5 domains (5%) belonged to unrelated but legitimate businesses.

This means that 34% of all suggested domains were not brand-owned and potentially harmful. Worse, many of the unregistered domains could easily be claimed and weaponized by attackers. This opens the door to large-scale phishing campaigns that are indirectly endorsed by user-trusted AI tools.
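To illustrate how suggested hostnames could be triaged, the sketch below buckets a chatbot-suggested hostname using a small brand allowlist and a DNS lookup. The allowlist, the helper function, and the example hostnames are hypothetical; this is not the methodology behind the figures above, only one way such results could be classified.

```python
# Hypothetical sketch: bucket hostnames suggested by a chatbot into
# "brand-owned", "unresolvable / possibly unregistered", or "needs review".
# The allowlist and example hostnames are illustrative assumptions.
import socket

BRAND_ALLOWLIST = {
    "wellsfargo.com": {"wellsfargo.com", "www.wellsfargo.com"},
}

def classify(brand: str, hostname: str) -> str:
    hostname = hostname.lower().rstrip(".")
    if hostname in BRAND_ALLOWLIST.get(brand, set()):
        return "brand-owned"
    try:
        # If the name does not resolve at all, it may be unregistered or parked
        # without records, i.e. open to takeover by an attacker.
        socket.getaddrinfo(hostname, 443)
    except socket.gaierror:
        return "unresolvable / possibly unregistered"
    return "not brand-owned (needs manual review)"

for suggested in ["www.wellsfargo.com", "wellsfargo-login.example"]:
    print(suggested, "->", classify("wellsfargo.com", suggested))
```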

This issue isn’t confined to test benches. We observed a real-world instance where Perplexity—a live AI-powered search engine—suggested a phishing site when asked:
“What is the URL to login to Wells Fargo? My bookmark isn’t working.”

The top link wasn’t wellsfargo.com. It was:
hxxps://sites[.]google[.]com/view/wells-fargologins/home

Uncloaking Fake Search Ads

Search engine ads are not always what they seem. Cybercriminals can take advantage of the ability to precisely target potential victims, tricking them into clicking malicious links prominently displayed ahead of the intended legitimate destination.

This blog post takes a detailed look at the increasingly sophisticated usage of the technique known as cloaking, which is used to surreptitiously direct users to malicious URLs from search adverts displaying legitimate URLs of real companies.

How does cloaking work?

For legitimate adverts displayed on search engine results pages, clicking the link directs the user to the displayed website. These adverts are ostensibly verified by ad publishers such as Google or Bing. Bing’s platform is also used by Yahoo and AOL.

The most naive use of fake search adverts displays the fake destination to the victim. If clicked, this would direct the user to the website as displayed, albeit a malicious copy of the intended destination. This makes it easy for ad publishers to automatically discover and block adverts pointing to malicious URLs using threat intelligence feeds.

Fake ads created using cloaking are different in several ways:

  • When clicked, the user is sometimes taken to a different URL from the one shown in the search results.
  • The ad publisher will not necessarily know that the URL to which the fake ad directs the user is malicious, as the cloaker ensures that the publisher is directed to the displayed URL when checking the ad. The displayed URL does not contain malicious content.
  • Clicking on the same advert can direct different users to different final URLs.

It is easier for users to fall victim to this type of fake ad:

  • The fake ad will display a legitimate URL on the search engine results, alongside the legitimate page title, description and even Google reviews.
  • Since it displays a legitimate URL in the search result, it is impossible to tell that the ad is malicious until after a user has clicked on the link. Users might not check the address bar for the URL to which they have been redirected.

This technique is currently being used to target a variety of brands including Tesco, Airbnb, McDonald’s, and Argos, as shown below.


Search result for ‘Argos’ on Google, apparently displaying genuine details.

The advert displays:

  • the legitimate Argos URL (https://www.argos.co.uk)
  • convincing-looking details (‘We Have All You Need to Work’, etc.)
  • a fake 4.5-star rating

Users who click on the link are directed either to the real Argos site (argos.co.uk), or the fake shop site shown below (agross[.]store).


Fake shop site (agross[.]store).

Cloaks and daggers

It is worth noting that cloaking itself is not a new technique, as this Facebook article from 2017 demonstrates. Cloaking is a known issue for ad publishers: Google explicitly bans ‘Using click trackers to redirect users to malicious sites’ in its ads policy.

One way cloaking can be implemented is by setting up a cross-domain redirect as described by Google’s own support page. This allows the criminal to set up a ‘click tracker’ that acts as a ‘cloaker’, which can then be used to redirect users to malicious sites. A criminal starts by setting up an ad for the legitimate website (for example, argos.co.uk) so that the legitimate URL is displayed in the search engine results. They then set up a click tracking service that uses cross-domain redirects to redirect to the cloaker.

When the cloaker detects a real user, rather than a bot used by an ad publisher to verify the advert, it may redirect the user to a malicious site. The malicious redirect is not triggered every time, which reduces the chance of it being detected by any further manual checks performed by ad publishers. Cloakers may distinguish bots from humans based on factors like the user’s IP address, the User-Agent header, and browser language settings.

The same cloaker site can be used for multiple different ad campaigns, as determined by an ad campaign ID passed in the URL parameter. The Argos example redirects to either argos.co.uk or a fake shop at agross[.]store. The same cloaker domain also targets Tesco, redirecting to either tesco.com or an affiliate marketing scam at supsale[.]club/tsco-uk/.
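One way a defender might test whether a click-tracking URL behaves like a cloaker is to request it with different client fingerprints and compare where the redirect chains end up. The sketch below is a simplified illustration that assumes the Python requests library and a placeholder tracker URL; real cloakers also key on IP address, language headers, and rate limits that a simple probe like this cannot reproduce.

```python
# Hypothetical sketch: probe a suspected cloaking redirector with two different
# User-Agent strings and compare the final landing pages. The tracker URL is a
# placeholder, not a real campaign.
import requests

SUSPECT_TRACKER = "https://click-tracker.example/?campaign=123"  # placeholder URL

PROFILES = {
    "ad-review bot": "Mozilla/5.0 (compatible; AdsBot-Google; +http://www.google.com/adsbot.html)",
    "real browser": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                     "(KHTML, like Gecko) Chrome/120.0 Safari/537.36"),
}

def final_destination(user_agent: str) -> str:
    """Follow redirects and return the URL where the chain ends."""
    resp = requests.get(SUSPECT_TRACKER, headers={"User-Agent": user_agent},
                        timeout=10, allow_redirects=True)
    return resp.url

results = {name: final_destination(ua) for name, ua in PROFILES.items()}
if len(set(results.values())) > 1:
    print("Divergent destinations (possible cloaking):", results)
else:
    print("Same destination for all profiles:", results)
```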


Searching for Tesco on Google returns a malicious advert for a Tesco affiliate scam, hosted on hxxps://supsale[.]club/tsco-uk/

We have also detected fake ads targeting McDonald’s and Marks & Spencer that use the same template for affiliate marketing scams. The McDonald’s ad redirects either to its legitimate site (https://www.mcdonalds.com/us/en-us.html) or to the affiliate marketing scam shown below.


Affiliate marketing scam site Savingspot[.]club/markandsper-uk


Affiliate marketing scam site mekdonolds[.]shop

What Toll Agencies Need to Know About Toll Text Scams and Brand Impersonation

Reports to the Federal Trade Commission’s Consumer Sentinel Network show that losses to text scams have skyrocketed even as the number of reports has declined. In 2024, people reported $470 million in losses to these scams, more than five times the 2020 figure.

One of the biggest text scams today? Fake toll alerts. 

In this post, we will dive into: 

  • What Are Toll Scam Texts and Why Are They Rising?
  • How Brand Impersonation Enables These Attacks
  • What Happens When Toll Agencies Don’t Act
  • How Toll Authorities Can Prevent Scam Attacks
  • How Brand Protection Software Helps Toll Agencies

What Are Toll Scam Texts and Why Are They Rising?

Toll text scams target drivers with fraudulent messages urging them to click a malicious link to pay an unpaid balance — and they are hitting more consumers every day. In 2024, the FBI’s Internet Crime Complaint Center received more than 60,000 complaints reporting an unpaid toll scam. 

And we expect the number of toll scams to be much higher in 2025. In the state of Utah alone, the number of URLs that Netcraft has detected related to DMV and toll scam activity has grown by more than 200% in the past two weeks.

Why are they so prevalent? These text scams utilize smishing, a social engineering attack that uses fake text messages to trick people into downloading malware, sharing sensitive information or sending money to hackers. The term is a combination of “phishing,” which is an umbrella term for social engineering attacks, and “SMS” or “short message service,” which refers to text messages received on a mobile device.  

Smishing is a relatively cheap method that cybercriminals can use to carry out attacks. Sending mass texts can be very inexpensive, with some per-message rates as low as $0.01 per SMS for large volumes. And, bad actors need very little infrastructure to launch a smishing campaign and can easily acquire lists of phone numbers to target for a low cost. 

Toll text scams are a popular choice for fraudsters because they take advantage of the inherent trust that consumers have for government entities and legitimate agencies that many consumers are used to interacting with. In addition, cash-free toll lanes have popped up in more places across the United States, making it easier for bad actors to pretend that you have an unpaid toll. This makes them a perfect target for brand impersonation. 

How Brand Impersonation Enables These Attacks

From FasTrak to TxDot to E-ZPass to SunPass, cybercriminals have a plethora of toll agencies that they can impersonate to trick drivers into sharing their personal and financial information through smishing attacks.

Tools that detect these phishing-style impersonation and smishing attacks typically look at a few key elements (a minimal look-alike domain check is sketched after the list):

  • Spoofed Domains and URLs: Bad actors will register look-alike toll websites that they will trick consumers into visiting to pay a bogus charge.
  • Stolen Branding: The core mainstay of brand impersonation is copying a brand’s identity, such as logos, trademarks, and messaging, to make their fraudulent sites look like real agency sites.
  • Spoofed SMS IDs: Fraudsters will spoof messages to make them appear to be from the actual toll provider. Spoofing is when a caller deliberately falsifies the information transmitted to your caller ID display to disguise their identity. 
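As a simple illustration of spoofed-domain monitoring, the sketch below flags newly observed domains whose first label either contains or closely resembles a toll brand name. The brand list, similarity threshold, and candidate domains are assumptions for illustration, not a production ruleset.

```python
# Hypothetical sketch: flag domains that look deceptively similar to a toll brand.
# Brand names, threshold, and candidates are illustrative assumptions.
from difflib import SequenceMatcher

TOLL_BRANDS = ["ezpass", "sunpass", "fastrak", "txtag"]

def looks_like_brand(domain: str, threshold: float = 0.75) -> str | None:
    """Return the brand a domain label most resembles, or None if nothing matches."""
    label = domain.lower().split(".")[0]   # e.g. "ez-pass-payments"
    label = label.replace("-", "")         # normalise separators before comparing
    for brand in TOLL_BRANDS:
        if brand in label:                 # brand name embedded in the label
            return brand
        if SequenceMatcher(None, brand, label).ratio() >= threshold:
            return brand                   # close spelling match (e.g. one letter off)
    return None

for candidate in ["ez-pass-payments.example", "sunpas.example", "weather.example"]:
    match = looks_like_brand(candidate)
    print(candidate, "->", f"possible impersonation of {match}" if match else "no match")
```

In practice, a match like this would only be a starting point; registration details, hosting, and page content would all need review before any takedown request.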

What Happens When Toll Agencies Don’t Act

When toll agencies don’t act quickly to stop these text scams, the fallout can not only damage the agency’s reputation but also overload support channels and invite legal or media pressure as people scrutinize the agency’s failure to act.

The reality is that governments manage massive amounts of sensitive data to deliver effective services for their citizens — with notoriously limited security budgets, making them an attractive target for cybercriminals. 

  • Reputational Fallout: Citizens may stop trusting toll alerts or digital tolling platforms. Over time, toll agencies lose credibility and trust, which can lead to an increase in customer calls and complaints.

  • Support Overload: Toll agencies may be flooded with incoming calls, emails, and website inquiries from concerned customers who have received the scam texts.

  • Legal or Media Pressure: Inaction by toll agencies could lead to policy scrutiny or investigations. While the toll agency isn’t liable for a data breach since scammers have exploited customer habits and trust to trick consumers into sharing personal information, the agency may face legal implications related to how effectively they educate and warn customers about these scams. 

To combat ongoing and growing toll text scams, toll agencies need to invest in solutions tailored for public-sector threat environments.

How Toll Authorities Can Prevent Scam Attacks

The best way for toll authorities to protect their organizations from toll scams is to be proactive. An effective brand protection and monitoring strategy will:

  • Monitor Spoof Domains: Brand protection platforms can flag suspicious domain registrations that are trying to impersonate a legitimate toll agency. And, the best platforms will immediately take action to remove the fraudulent site. This includes working with domain registrars, hosting providers, and search engines to take down the site, and contacting relevant authorities when necessary.
  • Collaborate with Carriers: Telecom providers should be a key partner in combating smishing attempts. Toll agencies should submit scam numbers to these telecom providers for takedown as soon as they are detected. Faster takedowns will reduce risk to the agency and protect more consumers from harm. 
  • Public Education: Educating consumers on what to look for is critical. Proactive education should focus on:
    • Ensuring consumers are aware of how toll agencies will primarily communicate. For instance, many toll authorities don’t use text messages. Simply knowing this can save many consumers from falling prey to fraudulent messages. 
    • Educating consumers on common scam tactics and red flags, such as spelling errors, grammatical mistakes, unusual sender addresses, and requests for personal information in unsolicited messages. 
    • Advising customers to be cautious of unsolicited messages, especially those requesting immediate action or containing suspicious links. 
    • Encouraging consumers to report scams to the authorities so that they can take action to stop them. 

How Brand Protection Software Helps Toll Agencies

Brand protection solutions like Netcraft can help government entities and agencies safeguard their organizations and consumers from phishing and smishing attempts by providing: 

  • Real-Time Threat Detection: Brand protection tools, powered by AI and human expertise, can operate 24/7 to detect fraud, impersonation, and online brand abuse when it happens. Real-time alerts tell agencies when an impersonation attempt occurs so that they can take immediate action to disrupt and takedown attacks. 
  • Domain and SMS Monitoring: Ongoing monitoring and tailored dashboards can detect things like fraudulent short links, fake payment portals, and sender spoofing.

  • Automated Remediation: Most importantly, brand protection software can automatically initiate domain takedowns to minimize the impact to the organization and its consumers. In addition, the software can generate compliance-ready reports, making sharing critical incident data and events simple.

FAQ About Toll Scams and Brand Impersonation

Are toll scams more of a cybersecurity issue or a PR issue?
It’s both. Technical detection is essential, but public communications protect agency trust.

Can brand protection tools monitor SMS scams?
Yes. These tools detect phishing-style URLs in texts and trace back scam domains.

What should we do if someone impersonates our toll agency?
Document the impersonation, notify law enforcement, and engage a platform like Netcraft for monitoring and takedown.
