Version: 2025-02-27

FAQs

What if I’m getting “not enough information” back as a response?

First, refer to the Prompting Overview to iterate through different prompts. More often than not, prompting the model in a different way brings back the desired response. If the model still does not return the expected answer, double-check that your document contains the desired information. If the model still struggles, reach out to our sales team to understand what next steps should be taken.

What if the confidence score is quite low?

Refer to our page explaining the confidence score.

What if the prompt is giving me a wrong or made-up answer?

Our model generally performs better than most at not making up answers. If you are facing this issue, try phrasing your prompt as: “If there is [x], what is [x]? Otherwise, return ‘n/a’.” This most often resolves the issue.
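The conditional format above can be generated programmatically. The sketch below is illustrative only; the `prompt_for` helper and the example field name are not part of the Lazarus API.

```python
# Sketch: phrasing a prompt so the model returns "n/a" instead of guessing.
# The prompt_for helper and the field name are hypothetical illustrations.

def prompt_for(field: str) -> str:
    """Build a prompt that asks for a field but allows an explicit 'n/a'."""
    return (
        f"If there is a {field} in the document, what is the {field}? "
        "Otherwise, return 'n/a'."
    )

print(prompt_for("policy expiration date"))
```

Pairing the question with an explicit fallback gives the model a sanctioned way to say the information is absent, which reduces fabricated answers.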

What if the model is pulling information from the wrong part of the document?

You can use the context window of the response to examine where in the document the model is pulling your answer from. If you find that the model is looking at the wrong part of the document to find an answer, directing it to the correct section can often help it retrieve the right answer. Check out our Prompting Overview for more information on how to do this.

What if I receive a “Chat Response Failed” error when prompting the VKG?

This error may occur in the UI environment when sending a chat message to a VKG. Some prompts may work while others fail, especially prompts that require long outputs (“list this…”, “give me all the steps for…”, etc.).

This error is caused by large node sizes in the VKG, which can prevent the model from generating responses to certain prompts.

To fix this, check that all nodes are around 512 tokens (~800 characters). Recreate the VKG with smaller nodes if needed.
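One way to keep nodes under the size limit is to chunk document text before recreating the VKG. This is a minimal sketch, not part of the Lazarus API; the ~800-character cap mirrors the guidance above, and the chunking helper is a hypothetical illustration.

```python
# Sketch: split document text into chunks small enough for VKG nodes,
# using the ~800-character (~512-token) guideline above.
# chunk_text is a hypothetical helper, not a Lazarus API function.

def chunk_text(text: str, max_chars: int = 800) -> list[str]:
    """Split text on whitespace into chunks of at most max_chars characters
    (a single word longer than max_chars is kept intact)."""
    chunks: list[str] = []
    current = ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) > max_chars and current:
            chunks.append(current)  # close out the current chunk
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

chunks = chunk_text("lorem ipsum dolor sit amet " * 200)
print(all(len(c) <= 800 for c in chunks))  # prints True
```

Chunking on whitespace keeps words whole; if your tokenizer counts differently, adjust `max_chars` accordingly.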

What if I encounter an error indicating my file is too large?

Please reach out to support@lazarusai.com or your Lazarus representative for access to a feature that lifts the API ingest limits, which guard against runaway processes on larger input files.

Supported Languages

| Code | Language |
| --- | --- |
| BG | Bulgarian |
| CS | Czech |
| DA | Danish |
| DE | German |
| EL | Greek |
| EN | English (all English variants) |
| ES | Spanish |
| ET | Estonian |
| FI | Finnish |
| FR | French |
| HU | Hungarian |
| ID | Indonesian |
| IT | Italian |
| JA | Japanese |
| KO | Korean |
| LT | Lithuanian |
| LV | Latvian |
| NB | Norwegian Bokmål |
| NL | Dutch |
| PL | Polish |
| RO | Romanian |
| RU | Russian |
| SK | Slovak |
| SL | Slovenian |
| SV | Swedish |
| TR | Turkish |
| UK | Ukrainian |
| ZH | Chinese (all Chinese variants) |