Toxic text detection

This demo examines a text for toxicity. To use it, type a sentence or pick one of the suggested samples.

Explanation of the demo

Toxic language is defined as anything rude, disrespectful, or otherwise likely to make someone leave a discussion. This demo shows how trained neural networks can be used to automatically detect such language in online comments. In addition, it gives toxicity estimates for identity groups mentioned in the text (e.g. male and female) and shows the three examples from the training data set that are closest to the entered text.
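The demo's internals are not published here, but the "closest training examples" feature can be illustrated with a minimal pure-Python sketch: represent each sentence as a bag-of-words vector and rank the training sentences by cosine similarity to the input. The sample sentences and toxicity scores below are hypothetical stand-ins, not the demo's actual data or model.

```python
from collections import Counter
from math import sqrt

# Hypothetical stand-ins for the training data: (sentence, toxicity score).
# The real demo uses a trained neural network and a large comment corpus.
TRAIN = [
    ("you are a wonderful person", 0.02),
    ("this argument is complete garbage", 0.78),
    ("thanks for sharing, very helpful", 0.05),
    ("nobody wants to read your stupid opinion", 0.91),
]

def bow(text):
    """Bag-of-words vector: a Counter of lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def closest_examples(text, k=3):
    """Return the k training sentences most similar to the input text."""
    q = bow(text)
    ranked = sorted(TRAIN, key=lambda item: cosine(q, bow(item[0])), reverse=True)
    return [sentence for sentence, _ in ranked[:k]]
```

For example, `closest_examples("your opinion is stupid")` ranks the fourth training sentence first, since it shares the most tokens with the input. The real demo would use learned sentence embeddings rather than raw word counts, but the retrieval idea is the same.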

Demo information

Date
May 2019
Author
Johannes Mayrhofer