A GitHub App built with Probot to deliver notifications of toxic comments
The app listens for new or edited issues, pull requests, and comments. It sends the content of those events to a semantic analysis API that rates the content on multiple sentiment axes. If the content is rated above a threshold on any axis, a notification email is sent to humans, who investigate and decide whether to take action.
This Probot app reads its configuration from two files:
- the `.github/biohazard-alert.yml` file in the `.github` repository under the user or organization the app is installed in
- the `.github/biohazard-alert.yml` file in the repository itself

Configuration settings are:
- `notifyOnError`: `true` means that notifications are generated when errors are encountered (default `true`)
- `skipPrivateRepos`: `true` means that events from private repositories will be ignored (default `true`)
- `threshold`: analysis ratings higher than this number will generate notifications (default `0.8`)

This app uses Google's Perspective API to analyze the content using the following models:
- `TOXICITY`
- `SEVERE_TOXICITY`
- `IDENTITY_ATTACK`
- `INSULT`
- `PROFANITY`
- `THREAT`
- `SEXUALLY_EXPLICIT`
- `FLIRTATION`
- `UNSUBSTANTIAL`

```sh
# Install dependencies
npm install

# Build the app
npm run build

# Run the bot locally
npm run dev
```
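Putting the settings described earlier together, a `.github/biohazard-alert.yml` that spells out every option might look like this (the values shown are the defaults):

```yaml
# .github/biohazard-alert.yml
notifyOnError: true      # email humans when the app itself hits an error
skipPrivateRepos: true   # ignore events from private repositories
threshold: 0.8           # notify when any model scores content above this
```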