For context on this: Copilot, like all these AI systems, is effectively a black box trained on a data set frozen at some point in time. Beyond that initial training set, it can also pull in publicly available information from the web.
As such, it is virtually impossible for any individual (even those who built it) to 'see' the process between input and output for a given query. Hence, it is almost impossible to train it to reliably tell the truth, relay facts accurately, or otherwise control what information it presents in a predictable fashion.
I expect Microsoft have realized that, with the huge amount of rather suspect information floating around the web on this topic right now, the model is throwing out blatant misinformation, or at best answers with no informational value. Garbage in, garbage out and all that.
I believe you'll also get this response if you ask it about other very recent, divisive news topics, stock tips, ongoing court cases before a verdict, or for an opinion on a political figure. I've not confirmed that, though.
Anyway, on those two topics they seem to have made the socially responsible choice to just straight up bounce any such queries, rather than letting it surface unpredictable, probably misleading information from unreliable sources. The statement about the data set being circa 2021 is probably the original boilerplate 'we won't answer that question' response they programmed in at launch back in 2021, and haven't updated since.
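For what it's worth, a filter like that doesn't need to involve the model at all. Here's a minimal sketch in Python of how such a pre-filter could work, purely as an illustration; the topic list, the canned message, and every name here are invented, and nothing is known about how Microsoft actually implements this:

```python
# Hypothetical sketch of a query pre-filter that bounces sensitive topics
# with a canned response before the model ever sees them. All names and
# the topic list below are invented for illustration.

BLOCKED_TOPICS = {"election", "stock tip", "ongoing trial"}  # assumed examples

CANNED_REFUSAL = (
    "I'm sorry, I can't answer that. My training data only goes up to 2021."
)

def answer(query: str) -> str:
    """Return a canned refusal for blocked topics; otherwise query the model."""
    lowered = query.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return CANNED_REFUSAL  # static boilerplate, written once and never updated
    return run_model(query)   # everything else goes through to the model

def run_model(query: str) -> str:
    # Stand-in for the real LLM call.
    return f"(model-generated answer to: {query!r})"

print(answer("Any stock tips for this quarter?"))  # -> the canned refusal
print(answer("Who won the 2018 World Cup?"))       # -> goes to the model
```

The point being that the 'circa 2021' line would live in that static string, completely decoupled from whatever the model has actually been trained on since.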
For recent topics that aren't subject to quite as much 'noise', where there is some consensus, or where misleading information ultimately isn't that damaging (e.g. historical information, sports results, weather, entertainment, celebrity news, published science), they'll happily let it answer, even if the results are potentially just as unreliable. Hence you will get information on post-2021 events for that stuff.
Good on Microsoft on this one. If only Facebook had such morals with their news algorithms...