- Understanding the AI
- How can I add alternate invocations or different ways to phrase a question?
- Why can't we change the top-level question wording of general library questions?
- Why can't I add every program or location to the chatbot?
- What are intents and entities?
- How does the chatbot respond to self-harm questions?
- How does Ocelot handle gender/racial biases within the AI?
- Moderation Queue
- Integrated Questions
Does Ocelot review general library content on a regular basis?
Ocelot's Compliance and Content Review Committee oversees the monitoring, updating, and implementation of Ocelot product content related to higher education compliance and best practices.
The committee creates and revises content which may include creating new chatbot templates, chatbot general library content, and general library video content.
The following topics are not within the scope of the committee to monitor for compliance within the chatbot:
- Policies related to state higher education
- Client custom chatbot questions, including client edits of general library content
- Human Resources
Is there a character limit for responses?
There is no character limit for chatbot responses. However, Ocelot recommends following best practices in order to provide the best user experience.
For guidance on writing a response, review the Editing the Knowledge Base Best Practices article.
Can I get a spreadsheet of the knowledge base?
Because Ocelot's general library knowledge base is proprietary, we are unable to export it into a separate file; however, we can add additional users to your admin portal to help navigate the knowledge base. With proper internal organization, the built-in filters will allow you to accomplish the same tasks you would with a spreadsheet.
Clients can download a CSV file of their custom questions on the Custom Questions page.
What is the average number of custom questions per department that clients have?
The average department creates about 200 custom questions. Custom questions include any new questions a client has created that are specific to their institution or organization. Custom questions also include completed question templates and client customizations of general library questions.
Understanding the AI
How can I add alternate invocations or different ways to phrase a question?
Alternate invocations are similar questions or statements with the same goal.
Question: How do I apply for financial aid?
Alternate Invocation: What is the application process for financial aid?
Adding alternate invocations to questions helps your chatbot understand the many different ways users might ask the same question. The Moderation Queue team adds alternate invocations to the general library and client custom questions on behalf of our clients so they can spend less time thinking about how their users might ask questions and can focus that energy on developing great content. On any given day, the Moderation Queue team creates hundreds of examples in addition to those submitted by our clients.
Every time a client submits a new question, the question comes through the moderation queue, where it is evaluated for functionality and potential AI conflicts. As the Moderation Queue team reviews a newly submitted question, they determine if the question needs to be connected to an existing general library or client custom question.
In cases where the Moderation Queue team connects a new question to an existing question, they are adding an alternate invocation based on this new phrasing. If the team is not able to connect a new question to an existing question, they add several alternate invocations to this new client custom question on the client's behalf.
If you think you may have a new way to ask a question, be sure to test your phrasing first to see if your chatbot understands your question. You can test your phrasing by selecting the Test my Bot icon in the upper right-hand corner of the admin portal home screen. If you receive an answer from the knowledge base or relevant suggestion boxes, then your chatbot understands what you asked and you have had a successful interaction.
If you receive an IDK (“I Don’t Know”) response to a question you would like the chatbot to be able to answer, select the Add a Response + button at the bottom of the interaction. You can then create a new question and response. Saving this question will submit it to the moderation queue for the Moderation Queue team to review.
You do not need to create and submit every possible phrasing of a question. Submitting a few examples helps to train our AI model. Over time, your chatbot will continue to learn additional phrasings.
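Conceptually, alternate invocations let many phrasings resolve to a single canonical question. The toy sketch below is not Ocelot's actual AI model, just a minimal illustration of the idea using word-overlap similarity; the knowledge base contents and threshold are made up for the example:

```python
# Toy illustration of alternate invocations: several phrasings map to one
# canonical question, and a new utterance is matched by word overlap.
# This is NOT Ocelot's AI model -- only a conceptual sketch.

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two phrasings."""
    wa = set(a.lower().replace("?", "").split())
    wb = set(b.lower().replace("?", "").split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def match_question(utterance: str, knowledge_base: dict, threshold: float = 0.3):
    """Return the canonical question whose phrasings best match, or None (IDK)."""
    best_q, best_score = None, 0.0
    for question, invocations in knowledge_base.items():
        for phrasing in [question] + invocations:
            score = jaccard(utterance, phrasing)
            if score > best_score:
                best_q, best_score = question, score
    return best_q if best_score >= threshold else None

# One top-level question with two alternate invocations behind it.
kb = {
    "How do I apply for financial aid?": [
        "What is the application process for financial aid?",
        "How can I get financial aid?",
    ],
}

print(match_question("what's the process to apply for financial aid", kb))
# -> "How do I apply for financial aid?"
print(match_question("where is the bookstore", kb))
# -> None (an IDK response)
```

Each alternate invocation you submit gives the matcher one more phrasing to score against, which is why a few good examples go a long way.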
Why can't we change the top-level question wording of general library questions?
Top-level questions are the question titles that show in the knowledge base. These are the titles that appear in suggestion boxes and the Explore feature for end-users. Each top-level question represents a singular goal, with many similar questions or statements behind the scenes that are not visible in your admin. In your knowledge base, you will see the top-level questions, while Ocelot manages the many alternate phrasings that live behind the scenes.
Client-specific responses are tied, or linked, to similar general library questions so they benefit from our global AI training. When the top-level question wording of a general library question is changed, it breaks the link between the general library version and the client-specific version of that question.
Why can't I add every program or location to the chatbot?
Ocelot teaches our AI model about the numerous program options at a school by creating comprehensive questions such as "What majors or programs are available?" and training it with hundreds of program-specific examples such as “Do you offer nursing?” In essence, we are combining programs into one broad question as we have found it greatly increases the chances of users getting the information they need and is significantly less upkeep for our clients. If we did not make this decision, there would be thousands of program offerings and questions for clients to review individually, and it would stretch our AI model in a manner that would make it less effective for users.
Similarly, Ocelot has trained our AI model to recognize numerous location options by creating broad questions such as "Where is the school located?" and training it with specific examples such as "How many campuses does the school have?"
Additionally, chatbot users may not know the exact title of an institution's specific program or precise campus location, and customizing the responses to these comprehensive questions provides a much better user experience. For example, an "English" major may be called "English Language and Literature," "English Literature," or "English Language and Writing." A prospective student is most likely to simply ask about an institution's "English major." Likewise, users from outside the Chicagoland area may not make a distinction between the city of Chicago and the city of Naperville and may use "Chicago" to refer to both.
With this structure in mind, we cannot add individual questions about programs, classes, or locations since we do not have the framework and training data to effectively parse out each and every program, course option, or location name as individual questions. Our current structure provides the highest chance for a user to get the information they need through a direct knowledge base response, suggestion box, or search link.
How does the chatbot respond to self-harm questions?
Ocelot has trained our AI model to identify language around self-harm and provide a standard response. Because the chatbot is neither human nor a trained healthcare or counseling professional, the response directs users to the Lifeline, a 24/7 resource that can provide that support.
All calls to the Lifeline are routed to the Lifeline center closest to the caller's area code, which can provide them with local resources after the immediate crisis has passed.
How does Ocelot handle gender/racial biases within the AI?
Ocelot implements industry best practices to control bias. Our chatbot does not identify users by race, ethnicity, nationality, gender, or religion. We train our AI to identify a user’s stated goal and topic and then provide answers and/or suggestions. We also take additional steps to reduce bias:
- Diverse Teams: We ensure that our algorithms, content, and maintenance of our AI model are supervised and written by diverse voices.
- Equitable Data: Our datasets and training are based on more than 11 million user interactions across more than 400 institutions, including Historically Black Colleges and Universities (HBCUs) and Hispanic-Serving Institutions (HSIs), non-native English speakers, and users outside of the United States.
- Supervised Learning: Our AI Conversation Design team monitors our model and user interactions to ensure responsible and equitable growth.
What videos will be displayed in the chatbot?
The chatbot will display standard video portal content either in the response directly, or as a suggested video when the chatbot determines the video is related to a question the user asked.
If you have created custom videos in your video portal, they will be displayed in your chatbot if you have created a custom question in your chatbot and a member of the Ocelot team has attached the custom video to the custom question.
Custom videos in your video portal will not be displayed as suggested videos in the chatbot.
Can I add videos to the Explore bar?
Videos cannot be added to the Explore bar, but they can be added to play within the question response.
To add a video to a chatbot response, submit a support ticket.
What are the Ocelot moderators looking for in the moderation queue?
When a new question is submitted, the Moderation Queue team reviews the question to ensure your chatbot is operating at its best. Our goal is to take our collective 200+ years of higher education knowledge and experience and apply it to our AI model. The Moderation Queue team takes on the management of our AI model so clients can focus on creating the most effective content for their users.
Here are the steps we take:
- Testing: Moderators test each submission for relevant suggestion boxes and conflicts with existing general library or client-specific content. If a client submits a question that matches existing content, moderators will need to combine or override the question in order to maintain Ocelot's existing AI foundation.
- AI Evaluation & Training: Moderators evaluate each submission to determine if we can train our AI model to match it with existing general library or client-specific content. If the submission does not have a high enough degree of relevancy or level of detail that matches existing content, moderators will leave it as a standalone question.
- Similar Phrasings (Alternate Invocations): Moderators add multiple ways of asking the question on behalf of our clients. Alternate invocations teach the AI to understand similar questions or statements with the same goal, as well as additional phrasings and keywords to help your chatbot provide a 1:1 response, relevant suggestion boxes, or website links. On any given day, Ocelot creates hundreds of examples in addition to those submitted by our clients.
We do not review or edit responses, review dates, content locking, content sharing, or expiration dates.
In the Moderation Queue, why do many of the questions say last edited by user "Ocelot"?
When a client submits new content, the Moderation Queue team reviews the question to ensure their chatbot is operating at its best. The moderation queue shows the status of a client's custom questions as they're being reviewed by this team.
The User field indicates the last person who edited the question. If "Ocelot" is the User, it means that a member of the Ocelot team is the last person who handled the question; this most often occurs as part of the process of the team reviewing and making the question live in the client's knowledge base.
How long are questions in the moderation queue?
When a client creates a new custom question or changes the wording of an existing custom question, it will move into the moderation queue to be reviewed by Ocelot staff. The Moderation Queue team ensures that the question will work in the client’s chatbot and provide a response within the parameters of the AI model.
Moderators review new questions for AI conflicts, and add multiple alternate invocations, or similar phrasings, in order to train the chatbot to understand a variety of ways that users might ask the same question. This process helps ensure users will receive the correct response to the new question regardless of how it is asked.
Questions in the moderation queue will be reviewed within 2 business days. Please allow the full 2 business days before testing any newly submitted questions in your chatbot.
What is the purpose of the Webhook fields in the chatbot question editing pages?
The webhook fields can be found on the editing page when either creating a new custom question or editing an existing question. These webhooks are used when integrating the chatbot with your campus Student Information System (SIS) to provide personalized responses.
Ocelot has set up over 45 web capabilities for common questions about Financial Aid, Admissions, Advising, Student Services, and more. These web capabilities help schools quickly connect the SIS to the chatbot to help answer the most frequently asked questions.
Ocelot can integrate with PeopleSoft Campus Solutions, Ellucian Banner, Ellucian Colleague, and Anthology Student. Other SIS integrations will be available in the future.
For more information on SIS integrations, review the Student Information Systems (SIS) Integration with Ocelot article.
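As a rough illustration of the personalization flow described above, a webhook handler typically receives the user's identity and question topic, looks up the student's record in the SIS, and returns a tailored answer. Everything in this sketch is a hypothetical assumption: the payload fields, function names, and SIS lookup do not reflect Ocelot's actual webhook contract.

```python
# Hypothetical sketch of a personalization webhook handler.
# Payload fields, function names, and the SIS lookup are illustrative
# assumptions -- they are NOT Ocelot's actual webhook contract.

def handle_webhook(payload: dict, sis_lookup) -> dict:
    """Build a personalized chatbot response from an SIS record.

    payload    -- e.g. {"student_id": "...", "intent": "financial_aid_status"}
    sis_lookup -- callable returning the student's SIS record (a dict),
                  or None if the student is not found.
    """
    record = sis_lookup(payload.get("student_id"))
    if record is None:
        # Fall back to a generic answer when no SIS record matches.
        return {"text": "Please contact the Financial Aid office for your status."}

    if payload.get("intent") == "financial_aid_status":
        return {"text": f"Hi {record['first_name']}, your financial aid status is: "
                        f"{record['aid_status']}."}

    # Unrecognized intents fall back to the general knowledge base.
    return {"text": "Sorry, I can't answer that question yet."}


# Example usage with a stubbed SIS lookup:
fake_sis = {"S123": {"first_name": "Ana", "aid_status": "Awarded"}}
response = handle_webhook(
    {"student_id": "S123", "intent": "financial_aid_status"},
    lambda sid: fake_sis.get(sid),
)
print(response["text"])  # Hi Ana, your financial aid status is: Awarded.
```

The key design point is the generic fallback: if the SIS lookup fails, the user still gets a useful, non-personalized answer instead of an error.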