A less wasteful way to train large language models, such as the GPT series, finishes training in the same amount of time while using up to 30% less energy, according to a new study from the University of Michigan.
Source: https://techxplore.com/news/2024-11-power-ai-software-tool.html