VKG Prompting
You can either directly query the knowledge graph (non-conversational) or chat with the VKG through RikY-Citation.
Direct Querying
Directly querying the VKG performs a semantic search and does not require full sentences. It can be used to confirm VKG content, for example when checking a newly created VKG or verifying that new nodes were uploaded. It is also helpful for filtering data: you can pull specific nodes and then use them as focused input for our language models.
For example, enter the topic name you want to filter for into the search bar, and you’ll see a list of nodes that match that topic.
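To make the idea of semantic search over nodes concrete, here is a minimal, self-contained sketch. It uses a toy bag-of-words similarity in place of the real embedding model, and the node texts are invented examples; none of this reflects the actual VKG implementation or API.

```python
from collections import Counter
import math

def vectorize(text):
    """Toy stand-in for an embedding: a bag-of-words term count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical node texts, standing in for VKG nodes.
nodes = [
    "Dental plan covers two cleanings per year",
    "Vision plan includes one annual eye exam",
    "Online portal link and login instructions",
]

def search(query, nodes, top_k=2):
    """Return the top_k nodes most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(nodes, key=lambda n: cosine(qv, vectorize(n)), reverse=True)
    return ranked[:top_k]

matches = search("dental", nodes)
```

Note that the query "dental" is a single keyword, not a full sentence, yet it still surfaces the relevant node; the retrieved nodes can then be passed as focused input to a language model.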
RikY-Citation
RikY-Citation lets you have natural language conversations with the VKG data. The citation feature makes it simple to validate information when needed: a user can check whether the model is pulling the correct information and tune the prompt accordingly. Users may not follow every citation for further reading, but having them available is often very helpful.
Example prompt:
"You are a benefit customer support resource dealing with an existing customer question. Provide your answer in two distinct sections based on the source of the information, namely the 'benefit plan details' and 'implementation guide': Provide the following information:
- Online Portal Name
- Online Portal Link
- IT Support Contact Information"
Tips for Prompting VKG with RikY-Citation
- To control the knowledge boundaries between VKG and non-VKG data, you can add an explicit instruction at the end of the prompt that gives the model the option to say the answer is not found in this knowledge base. For example: "My question is whether penguins fly; if that information is not included in this VKG, please just state that 'the question is not relevant.'"
- You can ask RikY-Citation to provide answers in a certain manner or tone. For example, you can add "regardless of the style of the question, provide answers in a polite and professional manner."
- For nodes that contain a question-and-answer pair (for example, forms), restating the question exactly as it appears in the node will yield the most accurate answers.
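The tips above amount to composing the user's question with standing instructions. A small sketch of that assembly step, assuming a hypothetical `build_prompt` helper (this is not part of the RikY-Citation API; the wording of the instructions is taken from the tips above):

```python
# Standing instructions drawn from the prompting tips; the constants and
# function below are illustrative, not part of any product API.
FALLBACK = ("If the answer is not found in this knowledge base, "
            "please just state that 'the question is not relevant.'")
TONE = ("Regardless of the style of the question, provide answers "
        "in a polite and professional manner.")

def build_prompt(question: str, tone: str = TONE, fallback: str = FALLBACK) -> str:
    """Append tone and knowledge-boundary instructions to a user question."""
    return f"{question}\n\n{tone}\n{fallback}"

prompt = build_prompt("Do penguins fly?")
```

Keeping the instructions as reusable constants makes it easy to apply the same knowledge boundary and tone to every question sent to RikY-Citation.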
Quantitative Prompting
A common obstacle when prompting the VKG for quantities is that the LLM cannot accurately calculate values that are not explicitly stated in the VKG content. For example, if a VKG contains only the capital of every state in the US, the LLM cannot be relied on to extract the number of states in the US. To work around this, here are two tips to follow:
- Extract a list. If a user wants a sense of what the quantitative value might be, they should prompt the LLM to list all corresponding values rather than ask for the total.
For example, asking the capitals-only VKG to list all the capital-state pairs yields a better understanding than asking how many states there are in the US.
- Embed the quantities. Create a node to store the quantity as a discrete value. If the LLM is expected to be able to answer quantitative questions, those quantitative values should be explicitly stated in the VKG.
For example, if the VKG of capitals has one node stating that there are 50 states in the US, the LLM will consistently answer that question correctly.
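The "extract a list" tip pairs naturally with counting in code: ask the model to list the items, then compute the total yourself rather than trusting the model's arithmetic. A minimal sketch, where the model response is an invented example in an assumed one-pair-per-line format:

```python
# Hypothetical LLM response to "List all the capital-state pairs",
# truncated to four entries for brevity.
response = """Montgomery, Alabama
Juneau, Alaska
Phoenix, Arizona
Little Rock, Arkansas"""

# Parse one "Capital, State" pair per line, then count in code
# instead of asking the LLM for the total.
pairs = [line.split(", ", 1) for line in response.strip().splitlines()]
count = len(pairs)
```

Counting the parsed list is deterministic, whereas asking the LLM "how many states are there?" against a capitals-only VKG is not.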