I like the color temperature and brightness of my lights responding to the time of day too much to go with smart switches over smart lights
Okay, different example. If a country dropped a couple of wounded soldiers without weapons over another country’s territory, would you call that an invasion?
If someone threw the dead body of a robber into a store, would you also call that the store being robbed?
That quote is not from the first one, though; it's from the second one
How is this surprising, like, at all? LLMs only ever emit a single token at a time, but to get the best results it of course makes sense to internally think ahead, come up with the full sentence you're going to say, and then output just the next token needed to continue that sentence. The model re-does that process for every single token, which wastes a lot of energy, but for the quality of the results it's the best approach you can take, and it felt kinda obvious to me that these models must be doing this on one level or another.
I’d be interested to see whether there’s massive potential for efficiency improvements by letting the model access and reuse the “thinking” it has already done for previous tokens
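To make the potential savings concrete, here's a purely illustrative back-of-the-envelope sketch; it's not any real model's API, it just counts "units of work" per decoding step. As far as I understand, transformers already reuse part of this per-token work through the KV cache (the attention keys/values computed for earlier tokens), which is roughly the difference between the two loops below, but the higher-level "thinking ahead" itself still gets redone from scratch for every token.

```python
# Purely illustrative: counts abstract "units of work" per decoding step, no real model here.

def decode_without_cache(prompt_len, new_tokens):
    work = 0
    seq_len = prompt_len
    for _ in range(new_tokens):
        work += seq_len   # re-process every token seen so far, on every step
        seq_len += 1
    return work

def decode_with_cache(prompt_len, new_tokens):
    work = prompt_len     # process the prompt once, keep the per-token states around
    for _ in range(new_tokens):
        work += 1         # each new step only handles the newest token
    return work

print(decode_without_cache(100, 50))  # 6225 units: grows quadratically with sequence length
print(decode_with_cache(100, 50))     # 150 units: grows linearly with sequence length
```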