
Machine learning-powered tools to protect digital communities from hate and harassment

Our technology

Abuse online drives churn, erodes brand trust, and increases the risk of offline harm to users. Sentropy provides easy access to cutting-edge technology that detects toxic behavior before it permanently harms users or damages digital communities.

Proactive Detection

Detect abuse before receiving a user's report. Our tools detect and categorize toxicity across your entire platform; you take action as you see fit.

Community-Scale Intelligence

Go beyond one-off decisions. We combine linguistic understanding with network analysis to improve moderation efficiency and consistency.

Advanced Warning

Stop worrying about "the next thing." Sentropy defends against new forms of abuse by learning from harmful content across the web.

Scalable, Continuous Learning

Natural language understanding made easy. We handle the infrastructure, data engineering, and modeling needed to learn as fast as communities evolve.

Rapid, Efficient Customization

Amplify the impact of existing data. We offer customized models that leverage existing data assets such as pattern lists, moderation logs, or blacklists.

Flexible Deployments

Engage on your terms. Our API and analytics products are available via cloud or on-premise deployments with volume-based pricing.

State-of-the-art abuse detection with just a few lines of code

Example API Call
curl -X POST "${API_URL}" \
 -H 'Authorization: Bearer ${TOKEN}' \
 -H 'Content-Type: application/json' \
 -d '{
   "text": "rl go to their offices in la and shoot every last one of them",
   "id": "4m8aw",
   "author": "infamouz",
   "segment": "general_chat"
 }'

Example Response
{
    "id": "4m8aw",
    "label_probabilities": {
        "IDENTITY_ATTACK": 0.00002,
        "PHYSICAL_VIOLENCE": 0.92623,
        "SEXUAL_AGGRESSION": 0.00004,
        "NEONAZISM": 0.14425
    }
}
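A response like the one above can be acted on with a few lines of client code. The sketch below is illustrative only: the `flag_labels` helper and the 0.5 threshold are assumptions, not part of Sentropy's documented API, and the sample payload mirrors the example response.

```python
# Minimal sketch of consuming a Sentropy-style classification response.
# The helper name and the 0.5 threshold are illustrative assumptions,
# not a documented part of the API.

def flag_labels(label_probabilities, threshold=0.5):
    """Return the abuse labels whose probability meets the threshold."""
    return sorted(
        label for label, p in label_probabilities.items() if p >= threshold
    )

# Sample payload copied from the example response.
response = {
    "id": "4m8aw",
    "label_probabilities": {
        "IDENTITY_ATTACK": 0.00002,
        "PHYSICAL_VIOLENCE": 0.92623,
        "SEXUAL_AGGRESSION": 0.00004,
        "NEONAZISM": 0.14425,
    },
}

flagged = flag_labels(response["label_probabilities"])
print(flagged)  # only PHYSICAL_VIOLENCE crosses the 0.5 threshold
```

In practice the threshold per label would be tuned to each community's tolerance; lowering it surfaces more borderline content for human review.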

Meet our team

Our team has years of experience working together and developing technology to solve meaningful challenges at Apple, Microsoft, Palantir, Sift, and Lattice Data. We're on a mission to build tools that create a healthier and more inclusive online experience, and we believe a team with diverse experience has the best chance of making the internet a safer place for everyone.

Michele Banko
Ethan Breder
Software Engineer
Ian Lin
Software Engineer
Brendon MacKeen
Data Analyst
Emma Peng
Machine Learning Engineer
Sean Pyne
Business Development
Laurie Ray
Data Analyst
John Redgrave
Taylor Rhyne
COO, Product
Ryan Riddle
Software Engineer
Shahin Saneinejad
Machine Learning
Infrastructure Engineer
Alex Wang
Machine Learning
Engineering Intern
Cindy Wang
Machine Learning Engineer
Andrew Ying
Machine Learning
Engineering Intern
Interested in our mission? We're hiring.