22 Commits (51c5e881e66822fb3cf64d7b191a13e518212b84)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Hendrik Langer | 61dd317915 | oobabooga runpod worker | 2 years ago |
| Hendrik Langer | b7ea267c36 | check all stop words | 2 years ago |
| Hendrik Langer | deadd56a6d | add one more stop word | 2 years ago |
| Hendrik Langer | 97eb29190e | double the pseudo-streaming chunk size. nearly every reply takes more than 16 tokens. | 2 years ago |
| Hendrik Langer | 5703dcd175 | rewrite model helpers | 2 years ago |
| Hendrik Langer | 6f23a91970 | don't parse error messages | 2 years ago |
| Hendrik Langer | 8213d50f15 | don't recalculate every time. use a window and remove a chunk of chat history when we get near the token limit | 2 years ago |
| Hendrik Langer | e24c276126 | longer timeout for local koboldcpp | 2 years ago |
| Hendrik Langer | cd15d6ca61 | fix `<START>` code word | 2 years ago |
| Hendrik Langer | 9afb0a1d32 | use pygmalion again | 2 years ago |
| Hendrik Langer | 88fb1fb6df | more stop words | 2 years ago |
| Hendrik Langer | 683df2ff25 | add more end tokens for koala | 2 years ago |
| Hendrik Langer | 0987efd00b | add another end token | 2 years ago |
| Hendrik Langer | 84a49007b9 | llama prompt try 1 | 2 years ago |
| Hendrik Langer | e4fdef932d | postprocessing in correct order | 2 years ago |
| Hendrik Langer | 84bc0fac90 | prompts and reply postprocessing | 2 years ago |
| Hendrik Langer | 2b0fcd77d5 | llama style prompts | 2 years ago |
| Hendrik Langer | eabc641320 | make error message start with `<ERROR>` so they get excluded from chat log | 2 years ago |
| Hendrik Langer | a985bffad0 | more f's | 2 years ago |
| Hendrik Langer | 8b2e608602 | tuning | 2 years ago |
| Hendrik Langer | 9d5f2de7a5 | implement pseudo-streaming | 2 years ago |
| Hendrik Langer | 7e9918a06d | add local koboldcpp generation | 2 years ago |
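Commit 8213d50f15 describes keeping chat history inside the model's context by dropping a chunk of the oldest messages when the running token count nears the limit, instead of recounting the whole prompt each turn. A minimal sketch of that windowing idea, assuming a crude whitespace tokenizer (the helper names here are hypothetical, not the repo's actual code):

```python
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per whitespace word.
    return len(text.split())

def trim_history(history: list[str], token_limit: int, chunk: int = 4) -> list[str]:
    """Drop the oldest `chunk` messages whenever the running total
    exceeds the token limit, rather than rebuilding the prompt from scratch."""
    total = sum(count_tokens(m) for m in history)
    while history and total > token_limit:
        removed, history = history[:chunk], history[chunk:]
        total -= sum(count_tokens(m) for m in removed)
    return history
```

Removing a whole chunk at once (rather than one message at a time) means the trim runs rarely, which matches the commit's "don't recalculate every time" intent.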
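Commits 9d5f2de7a5 and 97eb29190e concern pseudo-streaming: the backend returns a complete reply, which is then emitted in fixed-size chunks to imitate token streaming, with the chunk size doubled from 16 because most replies run longer than that. A sketch of the idea under those assumptions (word-based chunking here is an approximation of token chunking):

```python
import time

def pseudo_stream(reply: str, chunk_size: int = 32, delay: float = 0.0):
    """Yield a finished reply in fixed-size word chunks to simulate
    streaming from a backend that only returns whole completions."""
    words = reply.split()
    for i in range(0, len(words), chunk_size):
        yield " ".join(words[i:i + chunk_size])
        time.sleep(delay)  # pacing between chunks, if desired
```

A larger chunk size means fewer partial edits to the displayed message, at the cost of a less fluid streaming illusion.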
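Several commits (b7ea267c36, 88fb1fb6df, 683df2ff25, 0987efd00b) revolve around stop words and end tokens: the reply must be cut at the earliest occurrence of *any* stop sequence, not just the first one tried. A minimal sketch of that postprocessing step, with hypothetical stop sequences for illustration:

```python
def truncate_at_stop(reply: str, stop_words: list[str]) -> str:
    """Cut the reply at the earliest occurrence of any stop word,
    checking all of them so a later-listed word can't be missed."""
    cut = len(reply)
    for word in stop_words:
        idx = reply.find(word)
        if idx != -1:
            cut = min(cut, idx)
    return reply[:cut]
```

Taking the minimum index over all stop words is what "check all stop words" requires; returning on the first match would leave earlier stop sequences in the output.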