Changes default cache embedding model #326
Conversation
LGTM! One comment pending.
@@ -111,8 +111,19 @@ def __init__(
         if dtype:
             vectorizer_kwargs.update(dtype=dtype)

+        # raise a warning to inform users we changed the default model
+        # remove this warning in future releases
+        logger.warning(
With the warning below in the case of a pre-existing index AND overwrite=True, I don't think we actually need this one here, right?
You're right that a mismatch won't happen in that case, but there's still a behavior change when we switch default models. That's why I had to update the tests: the embedding distances changed.
If someone goes from 0.5 to 0.6, never specifies a vectorizer, and runs the same script start to finish (writes/reads/clears), they may get different results.
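For users who want to avoid that behavior change when upgrading, the usual escape hatch is to pass a vectorizer explicitly instead of relying on the default. A minimal sketch, assuming the redisvl SemanticCache and HFTextVectorizer APIs; the module paths and the pinned model name are illustrative and may differ by version:

```python
from redisvl.extensions.llmcache import SemanticCache
from redisvl.utils.vectorize import HFTextVectorizer

# Pin the embedding model explicitly so a library upgrade cannot
# silently change how cached prompts are embedded.
vectorizer = HFTextVectorizer(model="sentence-transformers/all-mpnet-base-v2")

cache = SemanticCache(
    name="llmcache",
    redis_url="redis://localhost:6379",
    vectorizer=vectorizer,
)

# Reads and writes now use the pinned model regardless of the
# library's current default.
cache.store(prompt="What is the capital of France?", response="Paris")
print(cache.check(prompt="What's France's capital city?"))
```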
Changes the default semantic cache embedding model to our fine-tuned model.
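The diff above is truncated at the logger.warning( call, so the exact message is not visible here. Purely as an illustration of the kind of transitional warning being discussed (the wording and logger name are assumptions, not the PR's actual text):

```python
import logging

logger = logging.getLogger("redisvl")

# Illustrative only: a one-release transitional notice of the sort the
# diff adds when the default embedding model changes.
logger.warning(
    "The default embedding model for the semantic cache has changed. "
    "Entries written with the previous default may return different "
    "distances; pass an explicit vectorizer to keep the old behavior."
)
```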