Growing evidence shows scammers are using AI‑generated videos of well‑known YouTube personalities to promote fake investment or training programs. These schemes claim to be endorsed by the creator or platform and promise huge returns to anyone who “follows the system”. Many victims instead end up losing their money.
What’s happening
- Fraudsters are publishing ads and videos where the image and voice of a popular creator (such as a well‑known YouTuber) are synthesized using AI. These are then used to claim the creator personally endorses a “get rich quick” platform.
- Viewers are lured into signing up or purchasing “exclusive access” tools, believing they’re working with the legitimate influencer or brand. The reality: the money often vanishes and the platform disappears.
- Regulatory bodies are now issuing warnings. For example, the Central Bank of Ireland has noted a 21% rise in AI‑deepfake scams over recent months, in which scammers impersonate real public figures to promote fraudulent investment schemes.
Why it matters
- This trend represents a new frontier in fraud: AI makes it possible to clone a celebrity’s likeness convincingly, lowering barriers for scammers and raising risks for everyday consumers.
- For influencers and platforms, the reputation damage can be substantial—even if they had no part in the scam. Their image is being used without consent, which raises legal and moral issues.
- For consumers, the scale and sophistication of such scams mean more people could be vulnerable, especially if they act on an ad or message without thorough verification.
What to watch
- Whether platforms (YouTube, Instagram, TikTok) strengthen measures to detect and remove deepfake ads and influencer impersonation.
- Whether legal consequences follow: can creators and brands successfully sue the platforms, or will regulators impose stricter oversight instead?
- Signs you can watch for: unsolicited influencer promos, unrealistic high‑return claims, requests to pay for “first spots,” and suspicious URLs or payment methods.