• LLMs are decoders; there's no concept of embeddings here as of yet
  • you tell the LLM what your signals are through prompting, e.g. metadata and tempo
  • decoder embeddings aren't needed; ChatGPT can find recommendations based on what you mention in the prompt
  • Ranking: pass in more information and more features, e.g. the user's preferences, as more text
  • NLP -> prompting
    • prompt with demographics, user location, seasonality, language preference, and prior recommendations that went well; include that data alongside the recommendations
    • deduce the topic of a song (e.g. heartbreak) with BERTopic or Top2Vec (see the topic-modeling sketch after this list)
    • for new songs the model doesn't know (cold-start items), feed the item's metadata into the prompt
    • or feed it into the ranker, which fixes the training cut-off problem
  • Standard recommender-system pipeline: retrieval and ranking
  • Retrieval: find items similar either to a query or, if there's no query, to what the user listened to in the past (see the retrieval sketch below)
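A minimal sketch of the topic-deduction idea above, using BERTopic on a handful of lyric snippets. The lyric strings are illustrative and a real corpus would contain many more songs; with only a few documents the default clustering may not separate topics cleanly.

```python
# Minimal sketch: deduce song topics (e.g. "heartbreak") from lyrics with BERTopic.
# The lyric snippets are illustrative; a production corpus would be much larger.
from bertopic import BERTopic

lyrics = [
    "you left me and my heart is broken",
    "tears on my pillow since you said goodbye",
    "i still see your face in every empty room",
    "loving you was easy, losing you is hard",
    "dancing all night under neon lights",
    "summer road trip with the windows down",
    "turn the music up and let the bass drop",
    "we own the weekend, nothing slows us down",
]

topic_model = BERTopic(min_topic_size=2)           # tiny corpus, so lower the minimum topic size
topics, probs = topic_model.fit_transform(lyrics)  # one topic id per song

# Inspect discovered topics and their keywords; a "heartbreak" cluster would
# surface words like "heart", "tears", "goodbye".
print(topic_model.get_topic_info())

# The topic keywords (or a short label) can then be injected into the
# recommendation prompt or passed as a feature to the ranker.
```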

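A minimal sketch of the retrieval step in the retrieve-then-rank pipeline: score catalogue items by cosine similarity to a query embedding, or to the centroid of the user's listening history when there is no query. The item names, vectors, and history are toy values; real embeddings would come from collaborative or content-based filtering.

```python
# Minimal sketch of embedding-based retrieval for the candidate-generation step.
import numpy as np

def cosine_sim(query: np.ndarray, items: np.ndarray) -> np.ndarray:
    """Cosine similarity between a query vector and each row of an item matrix."""
    query = query / np.linalg.norm(query)
    items = items / np.linalg.norm(items, axis=1, keepdims=True)
    return items @ query

# Precomputed item embeddings (collaborative or content-based) - toy values.
item_ids = ["song_a", "song_b", "song_c", "song_d"]
item_vecs = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.0, 0.9, 0.4],
    [0.1, 0.1, 0.9],
])

# No explicit query: fall back to the centroid of the user's listening history.
history = item_vecs[[0, 1]]             # user previously played song_a and song_b
query_vec = history.mean(axis=0)

scores = cosine_sim(query_vec, item_vecs)
top_k = np.argsort(-scores)[:2]         # small candidate set handed to the ranker
print([item_ids[i] for i in top_k])
```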
  • QA -> LLMs; they're all generalists; neural retrieval
  • Embeddings -> collaborative & content-filtering embeddings
  • a software system retrieves the user's profile + the user's prompt; the prompt gets constructed via a software pipeline, essentially a string-concatenation exercise (sketched below)
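A minimal sketch of that prompt-construction pipeline: fetch the user's profile, concatenate it with signals (demographics, location, seasonality, language preference, prior hits, cold-start item metadata) and the user's request, and hand the resulting string to the LLM for ranking. All field names, profile values, and the fetch function are assumptions for illustration; the LLM call itself is left out.

```python
# Minimal sketch: build a recommendation prompt by string concatenation from the
# user's profile plus the user's request. Profile fields and values are
# illustrative; a real system would pull them from a user/feature service.
from datetime import date

def fetch_user_profile(user_id: str) -> dict:
    """Stand-in for the software system that retrieves the user's profile."""
    return {
        "demographics": "25-34, premium subscriber",
        "location": "Berlin, Germany",
        "language_preference": "English",
        "prior_hits": ["Song X (played 40 times)", "Song Y (liked)"],
    }

def build_prompt(user_id: str, user_request: str, cold_start_items: list[dict]) -> str:
    """Concatenate profile signals, seasonality, and cold-start item metadata into one prompt."""
    profile = fetch_user_profile(user_id)
    season = f"current date: {date.today().isoformat()}"   # crude seasonality signal

    # Cold-start items the LLM cannot know from pretraining (training cut-off):
    # describe them explicitly so it can still rank them.
    new_items = "\n".join(
        f"- {item['title']}: tempo {item['tempo']} BPM, topic: {item['topic']}"
        for item in cold_start_items
    )

    return (
        "You are a music recommender. Rank candidate songs for this user.\n"
        f"User demographics: {profile['demographics']}\n"
        f"Location: {profile['location']}; {season}\n"
        f"Language preference: {profile['language_preference']}\n"
        f"Recommendations that went well before: {', '.join(profile['prior_hits'])}\n"
        f"New catalogue items (not in your training data):\n{new_items}\n"
        f"User request: {user_request}\n"
    )

prompt = build_prompt(
    "user_123",
    "something mellow for a rainy evening",
    [{"title": "New Song Z", "tempo": 72, "topic": "heartbreak"}],
)
print(prompt)  # this string would then be sent to the LLM for ranking
```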