One tenth of one percent. That is all it takes. Out of billions of parameters inside a large language model, only a tiny, specially chosen sliver, roughly 0.1%, carries the information that actually matters when the model needs to learn something new. The rest is, in a sense, dead weight during the update. Recognising this has led a team of researchers at Stevens Institute of Technology to an algorithm they call MEERKAT, which might, perhaps mor…
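The article does not spell out how MEERKAT picks its 0.1% sliver, but the general idea of a sparse update is easy to illustrate. Below is a minimal sketch, assuming one common selection criterion: keep only the coordinates with the largest gradient magnitudes and zero out the rest before stepping. The function name `sparse_update`, the `keep_fraction` parameter, and the gradient-magnitude criterion are all illustrative assumptions, not a description of MEERKAT itself.

```python
import torch
import torch.nn as nn

def sparse_update(model, loss_fn, batch, lr=1e-3, keep_fraction=0.001):
    """Hypothetical sparse step: update only the top `keep_fraction`
    of parameters, ranked globally by gradient magnitude.
    (Illustrative only; MEERKAT's actual criterion is not described here.)"""
    inputs, targets = batch
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # Pool every gradient magnitude to find one global cutoff.
    grads = torch.cat([p.grad.abs().flatten()
                       for p in model.parameters() if p.grad is not None])
    k = max(1, int(keep_fraction * grads.numel()))
    threshold = torch.topk(grads, k).values.min()

    # Mask out everything below the cutoff, then take a plain SGD step,
    # so roughly 0.1% of the coordinates actually move.
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            mask = p.grad.abs() >= threshold
            p -= lr * p.grad * mask

    return loss.item()

# Toy usage: a small classifier and one random batch.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
batch = (torch.randn(8, 32), torch.randint(0, 2, (8,)))
print(sparse_update(model, nn.CrossEntropyLoss(), batch))
```

Even this toy version captures the core trade-off the article describes: the full gradient is still computed, but only a tiny fraction of the parameters, the ones the criterion deems to matter, are actually changed.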