- Polite Guard aims to keep chatbots polite and less prone to exploitation
- With NLP at its core, it works by classifying text on a four-point scale of politeness
- The dataset and source code are available on GitHub and Hugging Face
Intel has unveiled Polite Guard, an open source AI tool that assesses the politeness of text, helping AI chatbots remain consistently polite to customers.
In a post on the Intel Community Blog, the company said the latest addition to its AI portfolio aims to provide a standardized framework for evaluating linguistic nuance in AI-driven communication.
Built on natural language processing (NLP), Polite Guard classifies text into four categories: polite, somewhat polite, neutral, and impolite. Intel claims this classification helps mitigate AI vulnerabilities by “providing a defense mechanism against adversarial attacks”.
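To illustrate the four-point scale, the toy sketch below mimics the label-and-score output shape such a text classifier typically returns. It is purely illustrative and is not Intel’s model: the real Polite Guard uses a trained NLP model, not keyword matching, and the label names and output format here are assumptions.

```python
# Toy illustration only -- NOT Intel's model. Shows the four-point politeness
# scale and the label/score shape a text classifier typically returns.
POLITENESS_LABELS = ["polite", "somewhat polite", "neutral", "impolite"]

def classify(text: str) -> dict:
    """Crude keyword heuristic standing in for the real NLP model."""
    lowered = text.lower()
    if "please" in lowered or "thank" in lowered:
        label = "polite"
    elif "could you" in lowered or "would you" in lowered:
        label = "somewhat polite"
    elif any(word in lowered for word in ("stupid", "useless", "shut up")):
        label = "impolite"
    else:
        label = "neutral"
    # A real model would emit a probability; the toy always returns 1.0.
    return {"label": label, "score": 1.0}

print(classify("Thank you for waiting!"))  # {'label': 'polite', 'score': 1.0}
```

In practice, a model released on Hugging Face like this one would typically be loaded through a standard text-classification interface rather than hand-written rules.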
Intel Polite Guard’s role for SMBs
According to Intel, Polite Guard reinforces system resilience by ensuring consistently polite output even when handling potentially harmful text.
The company hopes that this approach will “[improve] customer satisfaction and loyalty” for businesses implementing it.
Released under the MIT license, Polite Guard grants developers the flexibility to modify and integrate it into their own projects.
Its dataset and source code are available on GitHub and Hugging Face, with further developments to be published via the Intel Community Blog.