The problem is that in actual fact, there is no controversy around the physical assassination attempt on Trump. Someone tried to put a bullet in his head.
I completely understand that TDS is a real thing, and that fully 1/3 of Democrats believe it was staged - and I'd add that these are people who are grossly ignorant of shooting and ballistics. We might just as well say, based on the number of flerfers, that the idea of the earth being a globe is controversial.
AI, in its current iteration, isn't actually intelligent; it's just a fast aggregation algorithm over Google/Bing search queries.
Actual AI can do what humans do (even if we do it poorly) if it can:
- observe
- hypothesize
- test hypothesis
and iterate hypothesize-and-test ad infinitum, until something is discovered to be true or all hypotheses are exhausted (a rough sketch of that loop is below).
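Purely as an illustration, that loop looks something like this (the function names are placeholders I've made up, not anything a real system actually exposes):

```python
# Illustrative sketch only: observe -> hypothesize -> test, iterated until a
# hypothesis survives testing or the pool of hypotheses runs dry.
# observe(), hypothesize() and test() are made-up placeholders, not real APIs.

def discover(observe, hypothesize, test):
    evidence = observe()                          # observe
    candidates = list(hypothesize(evidence))      # hypothesize
    while candidates:                             # iterate ad infinitum...
        hypothesis = candidates.pop(0)
        if test(hypothesis, evidence):            # test hypothesis
            return hypothesis                     # ...until something is found to be true
        evidence = observe()
        candidates.extend(hypothesize(evidence))
    return None                                   # ...or all hypotheses are exhausted
```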
In its current iteration, AI works pretty well on quantifiable and finite sets. Actual discovery and truth still remain the domain of humans.
I would agree that there is no controversy around the fact that it happened. However, I would also urge you to consider the difference between how humans parse information and how an AI model does. There is no controversy from the logical viewpoint of the intelligent, thinking human.
Humans, as you say, can observe, hypothesize and use logic to test conclusions. We are able to think. This process makes the facts you stated about the assassination objectively true. Beyond reproach.
However, AI models do not do those things, at least, not in the same way.
A human can look at information and discern fact from fiction. Not well, but we can. We are able to apply a quite complex series of logical leaps to form intuition. In the 'sky' example, for instance, we can go and see with our own eyes. With a news report, we attach more or less weight to the source based on our own internal biases. We can judge not just the information presented but its reliability, and assign internal weightings to a source based on its type (seeing it with your own eyes vs. a retweeted FB post, for example).
An AI sees things differently. Firstly, it has to deal with WAY more information than we do. If you have all the knowledge on the internet at hand when making a decision, then every possible conclusion is supported by sources. Because every possible conclusion has been published, at least once, somewhere.
It happened, it didn't, it's all a hoax, the Chinese were behind it, aliens did it, etc., etc. I guarantee that all those 'statements' have been posted hundreds of thousands of times in the past few days.
When that's the case, you can't do what humans do, which is conclude, 'well, all the sources of information I've seen agree, therefore it's true', because with the sheer volume of sources you 'see' as an AI, that's not the case.
You also can't do the other thing that humans do and say 'I trust this source and it says so, so it's true'. That requires us to assign a personal bias to the reliability of the source, something that AI models tend not to do by design. This is for a few reasons.
1. Assigning weightings to a source requires a human to judge its reliability. There are too many sources, and too much being published every second of every day, for that to be possible.
2. Having a human assign bias weightings INTRODUCES bias in the model. This is evident. I don't want some 23-year-old junior software engineer in CA deciding how reliable statements from Fox News or Donald Trump are, and then telling an AI model to weight those sources less strongly than CNN when giving me information. I probably wouldn't consider their bias weightings even remotely accurate, and the model is ultimately going to serve all of humanity...
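To see why, picture the kind of trust table that engineer would have to write. Every number below is made up, which is exactly the point: someone has to make them up.

```python
# Entirely made-up source-trust table, not any company's real config.
# Whoever picks these numbers has baked their own bias straight into the model.
SOURCE_TRUST = {
    "cnn.com": 0.8,
    "foxnews.com": 0.3,
    "twitter.com": 0.2,
    "some-random-blog.example": 0.1,
}

def weighted_support(claim_sources):
    """Score each claim by the hand-assigned trust of the sources backing it."""
    return {
        claim: sum(SOURCE_TRUST.get(src, 0.0) for src in sources)
        for claim, sources in claim_sources.items()
    }
```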
As such, AI models tend to rely on the number of data points to assess reliability. You can see the issue with that in a situation where, as you say, a lot of people, especially those creating content online, have a very different version of events from reality. If they're creating the content, they provide more data points, so they get a higher weighting, even if they're spewing complete crap.
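As a toy illustration (mine, not how any particular model actually works), this is what 'reliability by volume' does when one version of events simply gets reposted more:

```python
# Toy example of "reliability by volume": a claim's weight is just its share of
# the data points, regardless of whether it's true. The numbers are invented.
from collections import Counter

posts = (
    ["the shooting was staged"] * 700          # loud, heavily reposted nonsense
    + ["a gunman fired at the rally"] * 300    # accurate, but repeated less often
)

counts = Counter(posts)
total = sum(counts.values())

for claim, n in counts.most_common():
    print(f"{claim!r}: weight {n / total:.0%}")

# 'the shooting was staged': weight 70%
# 'a gunman fired at the rally': weight 30%
# Volume wins, true or not.
```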
Finally, an AI is not able to use logic and intuition to say 'this I know to be true, that I think to be true, this seems unreliable, so I'll only present what I know to be fact'. That's a higher-order skill based on fairly complex logic and reasoning, and AI models can't do it, except perhaps by choosing to present only information that hits x% consensus across the overall data points, or x number of total data points. That loops straight back to the overall weight of data being created, not to 'is that data true'.
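That consensus-threshold workaround would amount to something like this toy filter (again, an illustration, not any vendor's real code), and it's still just counting:

```python
# Toy consensus filter: only surface claims that clear some share of the total
# data points. Illustration only; the threshold still measures volume, not truth.
from collections import Counter

def presentable_claims(posts, threshold=0.60):   # hypothetical x% cutoff
    counts = Counter(posts)
    total = sum(counts.values())
    return [claim for claim, n in counts.items() if n / total >= threshold]

# With the 700/300 split above, only 'the shooting was staged' clears 60%.
```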
In the end, the AI model is, as you say, a collation of Google/Bing results. You know as well as I do just how much bollocks is contained in those pages. But once the AI has collated it and served it up as 'fact', you lose the context of where the info originally came from, which makes it harder for you, the human end user, to discern fact from fiction. It assigns a veneer of authenticity, even if the original source for the info presented was some crazy left-wing nutjob on Reddit whom you'd dismiss out of hand otherwise.