When a client submits new content, a team of AI Conversation Design specialists moderates the question to ensure your chatbot is operating at its best. Our goal is to apply our collective 200+ years of higher education knowledge and experience to our AI model. The moderators manage our AI model so clients can focus on creating the most effective content for their users. Here are the steps we take:
1. Testing: Moderators test each submission for relevant suggestion boxes and for conflicts with existing General Library or client-specific content. If a client submits a question that matches existing content, moderators combine the two or override the existing question to maintain Ocelot's AI foundation.
2. Training: Moderators evaluate each submission to determine whether we can train our AI model to match it with existing General Library or client-specific content. If the submission is not relevant or detailed enough to match existing content, moderators leave it as a standalone question.
3. AI Evaluation: Moderators evaluate each submission for intents and entities, the backbone of chatbot design. Properly classifying a submission creates guardrails that prevent your chatbot from giving users incorrect responses.
4. Similar Phrasings (Alternate Invocations): Moderators add multiple ways of asking each question on behalf of our clients. Alternate invocations teach the AI to recognize similar questions or statements with the same goal, along with additional phrasings and keywords that help your chatbot provide a 1:1 response, relevant suggestion boxes, or website links. On any given day, Ocelot creates hundreds of these examples in addition to those submitted by our clients.
We do not review or edit responses, review dates, content locking, content sharing, or expiration dates.