Ocelot implements industry best practices to control bias. Our chatbot does not identify users by race, ethnicity, nationality, gender, or religion. We train our AI to identify a user’s stated goal and topic and then provide answers and suggestions. We also take the following additional steps to reduce bias:
  • Diverse Teams: We ensure that our algorithms, content, and AI model maintenance are written and overseen by diverse voices.
  • Equitable Data: Our datasets and training are based on more than 11 million user interactions across more than 400 institutions, including Historically Black Colleges and Universities (HBCUs) and Hispanic-Serving Institutions (HSIs), as well as non-native English speakers and users outside the United States.
  • Supervised Learning: Our AI Conversation Design team monitors our model and user interactions to ensure responsible and equitable growth.