Microsoft CTO Kevin Scott Thinks LLM ‘Scaling Laws’ Will Hold Despite Criticism

An anonymous reader quotes a report from Ars Technica: During an interview with Sequoia Capital’s Training Data podcast published last Tuesday, Microsoft CTO Kevin Scott doubled down on his belief that so-called large language model (LLM) "scaling laws" will continue to drive AI progress, despite some skepticism in the field that progress has leveled out. Scott played a key role in forging a $13 billion technology-sharing deal between Microsoft and OpenAI. "Despite what other people think, we’re not at diminishing marginal returns on scale-up," Scott said. "And I try to help people understand there is an exponential here, and the unfortunate thing is you only get to sample it every couple of years because it just takes a while to build supercomputers and then train models on top of them."
LLM scaling laws refer to patterns explored by OpenAI researchers in 2020 showing that the performance of language models tends to improve predictably as the models get larger (more parameters), are trained on more data, and have access to more computational power (compute). The laws suggest that simply scaling up model size and training data can lead to significant improvements in AI capabilities without necessarily requiring fundamental algorithmic breakthroughs. Since then, other researchers have challenged the idea that these scaling laws will continue to hold over time, but the concept remains a cornerstone of OpenAI’s AI development philosophy. Scott’s comments can be found around the 46-minute mark.
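The "predictable improvement" described above takes the form of a power law: loss falls smoothly as model size grows. A minimal sketch of that relationship, using illustrative constants in the style of the 2020 OpenAI work (the exact values are assumptions for demonstration, not precise predictions):

```python
# Sketch of the power-law form of LLM scaling laws.
# N_c and alpha are illustrative constants in the spirit of the
# 2020 OpenAI scaling-laws work, not exact published predictions.

def loss_from_params(n_params, n_c=8.8e13, alpha=0.076):
    """Predicted cross-entropy loss as a function of model size N,
    assuming data and compute are not the bottleneck:
    L(N) = (N_c / N) ** alpha."""
    return (n_c / n_params) ** alpha

# Growing the model 10x yields a modest but predictable loss drop:
small = loss_from_params(1e9)    # ~1B-parameter model
large = loss_from_params(1e10)   # ~10B-parameter model
print(small > large)             # bigger model, lower predicted loss
```

The key property driving Scott's optimism is that the curve is smooth: each order-of-magnitude increase in scale buys a consistent fractional reduction in loss, rather than hitting a wall.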

Read more of this story at Slashdot.
