LLMs to the Rescue: Securing Wearable IoT Data from Poisoning Attacks

By W. K. M Mithsara, Ning Yang, Ahmed Imteaj, Hussein Zangoti, Abdur R. Shahid


Published on November 24, 2025 | Vol. 1, Issue No. 1

Summary

A novel framework leverages Large Language Models (LLMs) to bolster the security of wearable Internet of Things (IoT) systems, specifically for Human Activity Recognition (HAR). To address the vulnerability of machine learning models to data poisoning attacks, the approach applies zero-shot, one-shot, and few-shot learning paradigms. It incorporates "role play" prompting, in which the LLM acts as a domain expert to contextualize anomalies, and "think step-by-step" reasoning to infer poisoning indicators and suggest clean alternatives directly from raw sensor data. This design minimizes reliance on extensive labeled datasets, yielding a robust, adaptable, real-time defense. Extensive evaluations demonstrate the framework's effectiveness across detection accuracy, sanitization quality, latency, and communication cost, improving the security and reliability of wearable IoT systems.
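
To make the prompting strategy concrete, here is a minimal sketch of how a zero-shot, role-play, step-by-step detection prompt might be assembled from a raw accelerometer window. The window format, sampling rate, output schema, and helper names are illustrative assumptions, not the authors' exact implementation:

```python
# Minimal sketch of a zero-shot detection prompt in the spirit of the
# paper's "role play" + "think step-by-step" strategy. The window format,
# sampling rate, and output schema are illustrative assumptions, not the
# authors' exact implementation.

def build_detection_prompt(window: list[tuple[float, float, float]]) -> str:
    """Format a window of raw (x, y, z) accelerometer samples into a
    role-play, step-by-step prompt for poisoning detection."""
    readings = "\n".join(
        f"t={i}: x={x:.3f}, y={y:.3f}, z={z:.3f}"
        for i, (x, y, z) in enumerate(window)
    )
    return (
        # Role-play framing: the LLM acts as a domain expert.
        "You are an expert in wearable-sensor security and human activity "
        "recognition.\n"
        # Step-by-step reasoning to surface poisoning indicators.
        "Think step by step: (1) describe the signal pattern expected for a "
        "normal activity, (2) flag samples inconsistent with that pattern, "
        "(3) decide whether the window shows signs of data poisoning, and "
        "(4) if poisoned, propose corrected values for the flagged samples.\n\n"
        f"Accelerometer window (assumed 50 Hz, units of g):\n{readings}\n\n"
        "Answer with: verdict (clean/poisoned), flagged sample indices, and "
        "suggested clean replacements."
    )

# Hypothetical window with an injected, physically implausible spike:
window = [(0.01, -0.02, 0.98)] * 8 + [(9.5, -8.7, 4.2)] + [(0.02, -0.01, 0.99)] * 7
prompt = build_detection_prompt(window)
# response = llm_client.complete(prompt)  # any chat-completion API would do
```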

Why It Matters

This research marks a significant shift in AI security, demonstrating the expanding role of Large Language Models (LLMs) beyond traditional text processing. For AI professionals, it matters for several reasons. First, it directly addresses the critical and growing threat of data poisoning in sensitive domains such as healthcare IoT, where compromised data can have severe real-world consequences for user well-being and system reliability. By leveraging LLMs for sophisticated anomaly detection and data sanitization, the paper offers a powerful new defense against a pervasive threat.

Second, the use of zero-shot, one-shot, and few-shot learning paradigms is a game-changer. It vastly reduces dependence on large, costly, and often unavailable labeled datasets, making advanced AI security solutions far more adaptable and deployable in the dynamic, resource-constrained environments typical of IoT. This approach democratizes access to robust AI defenses, making cutting-edge security less reliant on massive data infrastructure.
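
As a rough illustration of why so little labeled data is needed, a few-shot prompt can be assembled from just a handful of labeled windows. The exemplar text, labels, and formatting below are hypothetical, not drawn from the paper:

```python
# Hypothetical few-shot variant: two labeled windows stand in for a full
# training set. Exemplar text and labels are fabricated for illustration.

FEW_SHOT_EXAMPLES = [
    ("t=0..15: steady readings near (0.0, 0.0, 1.0) g while walking", "clean"),
    ("t=0..15: abrupt spikes to (9.5, -8.7, 4.2) g amid steady samples", "poisoned"),
]

def build_few_shot_prompt(window_text: str) -> str:
    """Prepend a handful of labeled exemplars so the LLM can classify a new
    window without model retraining or a large labeled corpus."""
    shots = "\n\n".join(
        f"Window:\n{w}\nVerdict: {label}" for w, label in FEW_SHOT_EXAMPLES
    )
    return (
        "You are an expert in wearable-sensor security.\n\n"
        f"{shots}\n\n"
        f"Window:\n{window_text}\nVerdict:"
    )
```

Swapping the exemplars adapts the detector to a new device or activity set without retraining, which is what makes the paradigm attractive for resource-constrained IoT deployments.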

Finally, this work underscores the evolving versatility of LLMs as sophisticated reasoning engines. Their ability to "role play" as domain experts and "think step-by-step" to infer poisoning indicators from raw, non-linguistic sensor data fundamentally expands their utility. This opens new frontiers for hybrid AI systems that can intelligently contextualize, analyze, and secure diverse data streams, moving LLMs from purely generative tasks to critical real-time decision-making in high-stakes environments. This isn't just about securing IoT; it's about building more resilient, trustworthy, and adaptable AI systems across the board.
