

Memories are a powerful way to provide broader context for your ask. Historically, we've always called upon memory as a core component for how computers work: think of the RAM in your laptop. With just a CPU that can crunch numbers, a computer isn't that useful unless it knows what numbers you care about. Memories are what make computation relevant to the task at hand.

We access memories to be fed into Semantic Kernel in one of three ways - with the third way being the most interesting:

1. Conventional key-value pairs: Just like you would set an environment variable in your shell, the same can be done when using Semantic Kernel. The lookup is "conventional" because it's a one-to-one match between a key and your query.

2. Conventional local-storage: When you save information to a file, it can be retrieved with its filename. When you have a lot of information to store in a key-value pair, you're best off keeping it on disk. (Both conventional styles are sketched just after this list.)

3. Semantic memory search: You can also represent text information as a long vector of numbers, known as "embeddings." This lets you execute a "semantic" search that compares meaning-to-meaning with your query.
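To make the first two, "conventional" styles concrete, here is a minimal sketch in plain Python - not Semantic Kernel's actual API - of a key-value lookup and a file-based lookup. The third, semantic style is sketched after the embedding discussion below.

```python
import os
from pathlib import Path

# 1. Conventional key-value pairs: a one-to-one match between a key and
#    your query - you get a hit only when the key matches exactly.
memory = {"user_name": "Alice"}
print(memory["user_name"])

# The same idea as setting an environment variable in your shell.
os.environ["USER_NAME"] = "Alice"
print(os.environ["USER_NAME"])

# 2. Conventional local-storage: the filename acts as the key, and the
#    value (too big to keep in a key-value pair) lives on disk.
Path("project_notes.txt").write_text("Lots of information kept on disk.")
print(Path("project_notes.txt").read_text())
```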


Embeddings are a way of representing words or other data as vectors in a high-dimensional space. Vectors are like arrows that have a direction and a length. High-dimensional means that the space has many dimensions, more than we can see or imagine. The idea is that similar words or data will have similar vectors, and different words or data will have different vectors.

Embeddings are useful for AI models because they can capture the meaning and context of words or data in a way that computers can understand and process. This helps us measure how related or unrelated they are, and also perform operations on them, such as adding, subtracting, multiplying, etc.

So basically you take a sentence, paragraph, or entire page of text, and then generate the corresponding embedding vector. When a query is performed, it is transformed into its embedding representation, and a search is run through all the existing embedding vectors to find the most similar ones. This is similar to when you make a search query on Bing and it gives you multiple results that are proximate to your query. Semantic memory is not likely to give you an exact match - but it will always give you a set of matches ranked by how closely your query matches other pieces of text.

Why are embeddings important with LLM AI? Since a prompt is the text we give as input to an AI model to generate a desired output or response, we need to consider the length of that input against the token limit of the model we choose to use. Embeddings help here: rather than stuffing everything we know into a prompt, we can search our memories for just the most relevant pieces of text and include only those.
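As a sketch of that store-then-search loop: the toy `embed()` below just counts words, standing in for a real embedding model, but the pipeline is the same - embed every text up front, embed the query, then rank all stored vectors by cosine similarity.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: a sparse word-count vector. A real embedding model
    # returns a dense vector whose geometry captures meaning, so related
    # words (e.g. "fox" and "foxes") would also land near each other.
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Dot product divided by the product of the two vector lengths.
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Store: generate the embedding vector for every piece of text up front.
documents = [
    "The quick brown fox jumps over the lazy dog",
    "A guide to cooking pasta at home",
    "Foxes are small wild canines found worldwide",
]
store = [(doc, embed(doc)) for doc in documents]

# Query: transform it to its embedding, then rank every stored vector.
query = embed("wild fox behavior")
for doc, vec in sorted(store, key=lambda pair: -cosine_similarity(query, pair[1])):
    print(f"{cosine_similarity(query, vec):.3f}  {doc}")
```

Note that no document matches the query exactly; the search still returns every document ranked by similarity, which is exactly the "no exact match, but ranked matches" behavior described above.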

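For the token-limit concern, you can count a prompt's tokens before sending it, for example with the `tiktoken` tokenizer. The `cl100k_base` encoding and the 8,192-token limit below are illustrative assumptions; use whatever matches your model.

```python
import tiktoken

# cl100k_base is the encoding used by several OpenAI models; pick the
# encoding that corresponds to the model you actually call.
encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Summarize the following document: ..."
token_count = len(encoding.encode(prompt))

TOKEN_LIMIT = 8192  # illustrative; check your model's documented limit
if token_count > TOKEN_LIMIT:
    raise ValueError(f"Prompt is {token_count} tokens; limit is {TOKEN_LIMIT}.")
print(f"{token_count} tokens - within the limit.")
```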