technology@lemmy.ml
QwQ-32B, a 32-billion-parameter language model, achieves performance comparable to the 671-billion-parameter DeepSeek-R1 by scaling reinforcement learning
(qwenlm.github.io)
from yogthos@lemmy.ml to technology@lemmy.ml on 05 Mar 23:38
https://lemmy.ml/post/26782151
#technology