Commit history — all commits authored by Hendrik Langer, 2 years ago:

1b4922a68e  add runpod oobabooga worker
76ed45f9d4  test LORAs
ec05073000  more stable diffusion parameters
ef7a60e2d2  remove chunk window for gpu services
9198a9d7f1  upscaling model
54a8df107c  evaluation
b7ea267c36  check all stop words
6c5a906f3d  fix
deadd56a6d  add one more stop word
97eb29190e  double the pseudo-streaming chunk size. nearly every reply takes more than 16 tokens.
d28b86294d  koboldai-style prompt
e20acf2705  NSFW mode
5703dcd175  rewrite model helpers
2748a27f95  fix
6f23a91970  don't parse error messages
8213d50f15  don't recalculate every time. use a window and remove a chunk of chat history when we get near the token limit
ce67251879  update stablehorde integration (1/2)
0d0507d649  regex work
e24c276126  longer timeout for local koboldcpp
53d372d11d  make keywords case insensitive
6459865f07  implement keywords
cd15d6ca61  fix <START> code word
9afb0a1d32  use pygmalion again
4f973609c6  message redaction
88fb1fb6df  more stop words
683df2ff25  add more end tokens for koala
0987efd00b  add another end token
87b5ce3f49  make ai_name exchangable
84a49007b9  llama prompt try 1
e4fdef932d  postprocessing in correct order
e05c628fd6  bot greeting and example prompts
84bc0fac90  prompts and reply postprocessing
2b0fcd77d5  llama style prompts
eabc641320  make error message start with <ERROR> so they get excluded from chat log
a985bffad0  more f's
bd2c0aa043  more robust model file search
8b2e608602  tuning
9d5f2de7a5  implement pseudo-streaming
7e9918a06d  add local koboldcpp generation
72adbf7315  generate multiple answers
aaa1b72877  add example gpu cloud services
77e091fddb  remote worker
fb826a4ae5  add test command for custom remote endpoint
b8144c1c4c  update dependencies
e5956b76e6  fix missing f
6b577eba11  name endpoints
91d3c8c192  more stable diffusion endpoints
316f50fdb5  prepare RWKV
609c9901d9  more tests on the remote workers
bd4fe4bb63  try other quantized model