Politics

Warning: this is a fairly gruesome film clip over on X (another information source avoided by many?), but it demonstrates a major concern I have with the new battery technology, perhaps like the one(s) in the EV in the garage. The poor bugger was likely taking this one (a lithium battery for an EV bike of a type very popular in Asia) upstairs to charge it in his apartment.

A decade ago, when I was still gainfully employed, my business unit built probably two-thirds of the EOD robots used nationwide by local police forces. They were electric powered, and we provided extremely stringent charging and storage guidance with each sale. I see none of that today in consumer products. I am not certain the technology has improved to quite that level of perfection.


Damn near equivalent to a mil-grade thermite grenade.
 
I am no longer able to open the video of the 1992 Barcelona Olympics opening ceremony linked above (yesterday it was OK).
Moreover, I am not able to open any other Barcelona 1992 opening video on YouTube (message: "video not available").
Is this normal? Has somebody removed them on purpose?
 
But the AI model doesn't know truth; it only knows what is presented in the data set.

As an example, let's say you ask the AI 'what color is the sky?'

The AI will look at the data in the model set, seek the highest-scoring answer based on weighting factors (the number of times the point is stated, the number of sites where that answer occurs, when it was stated, etc.) and throw out an answer. If, in the example above, some bot has created a site stating 30 billion times that the sky is red, and people found that funny and reshared it to other sites, the AI will tell you 'The sky is red.' It cannot discern truth, nor can it parse 'useful info' from 'useless info'; it simply returns the consensus opinion based on the data presented to it.
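
To make that failure mode concrete, here is a deliberately crude sketch in Python. It is purely illustrative: real models use learned statistical weights rather than literal vote counting, and the claims and counts below are invented for the example.

```python
from collections import Counter

def consensus_answer(scraped_claims):
    """Return whichever claim appears most often in the scraped data."""
    answer, votes = Counter(scraped_claims).most_common(1)[0]
    return answer, votes

# A bot-amplified falsehood simply outvotes the honest sources.
data = ["the sky is red"] * 30_000 + ["the sky is blue"] * 9_000
print(consensus_answer(data))  # ('the sky is red', 30000)
```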

Taking your query above: if in the public domain there are loads of sites saying it's a conspiracy set up by Trump himself, and those sites have lots of views, lots of traffic and lots of secondary posts, then the AI is quite likely to present that narrative in its answer. It could also link those sites, name the wrong perpetrator, accuse a foreign actor based on pure internet speculation, incorrectly say he was killed, or say that it was unfortunate that he wasn't...

It can't discern fact from fiction; it only sees consensus based on whatever weighting factors were assigned in the model. The Microsoft guys can't see the decision web or the sources drawn upon in building the response, and it will change minute by minute as the data set grows, so they can't stop it spreading anything the social zeitgeist is pushing. They also can't force it to limit its answer to verifiable facts such as 'it happened, where it happened, did he survive'.

In cases where the topic of a query is any combination of very recent, very politically charged, the subject of a great deal of misinformation, or potentially very damaging to persons or movements, it's probably a sensible time for corporate ass-covering: simply deny the query, in case it spits out something really dicey or really wrong to a huge audience, with very real implications for persons or events.

The problem is that, in actual fact, there is no controversy around the physical assassination attempt on Trump. Someone tried to put a bullet in his head.

I completely understand that TDS is a real thing, and that fully a third of Democrats believe it was staged; I'd add that these are people who are grossly ignorant of shooting and ballistics. We might just as well say, based on the number of flerfers (flat-earthers), that the idea of the earth being a globe is controversial.

AI, in its current iteration, isn't actually intelligent; it's just a fast aggregation algorithm over Google/Bing search queries.

Actual AI could do what humans do (even if we do it poorly) if it could:
  • observe
  • hypothesize
  • test the hypothesis
and then iterate hypothesize-and-test ad infinitum until something is discovered to be true or all hypotheses are exhausted (a toy sketch of that loop follows).
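
Purely for illustration, here is a minimal sketch of that observe / hypothesize / test loop in Python. Nothing here resembles a real AI system; the observations, candidate rules, and function name are all invented for the example.

```python
def discover(observations, hypotheses):
    """Try each hypothesis against every observation; return the first
    one that survives all tests, or None if all are exhausted."""
    for name, rule in hypotheses.items():
        if all(rule(x) == y for x, y in observations):
            return name  # provisionally 'true'
    return None          # every hypothesis failed

# Which rule explains the observed (x, y) pairs?
observations = [(1, 2), (2, 4), (3, 6)]
hypotheses = {
    "y = x + 1": lambda x: x + 1,
    "y = 2x": lambda x: 2 * x,
}
print(discover(observations, hypotheses))  # 'y = 2x'
```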

In its current iteration, AI works pretty well on quantifiable and finite sets. Actual discovery and truth still remain the domain of humans.
 
Just tried it; the Barcelona video opens fine for me.
 
I know nothing about Copilot, but isn't it just an AI source that gathers information for me? Maybe I WANT to know what the current BS being reported about the assassination attempt is. If it told me that it was a hoax, I would be able to determine that the current majority of posts or news articles out there are saying so. That is information that I, as the human, want or need, and it's up to ME to do with it what I want.

Allowing the AI or the corporation to select what information it allows me to see is censorship and social engineering. Give me the information I requested, not only the information you want me to see.

Like I said, I have never used Copilot so I may be talking out my rear.
 
I would agree that there is no controversy around the fact that it happened. However, I would also urge you to consider the difference between how humans parse information and how an AI model does. There is no controversy from the logical viewpoint of the intelligent, thinking human.

Humans, as you say, can observe, hypothesize and use logic to test conclusions. We are able to think. This process makes the facts you stated about the assassination objectively true. Beyond reproach.

However, AI models do not do those things, at least, not in the same way.

A human can look at information and discern fact from fiction. Not well, but we can. We are able to apply a quite complex series of logical leaps to form intuition. In the 'sky' example, for instance, we can go and see with our own eyes. With a news report, we attach more or less weight to the source according to our own internal biases. We can judge not just the information presented but also its reliability, and assign internal weightings to a source based on its type (seeing something with your own eyes vs. a retweeted Facebook post, for example).

An AI sees things differently. Firstly, it has to deal with WAY more information than we do. If you have all the knowledge on the internet at hand when making a decision, then every possible conclusion is supported by sources, because every possible conclusion has been published, at least once, somewhere.

It happened, it didn't, it's all a hoax, the Chinese were behind it, aliens did it, etc., etc. I guarantee that all of those 'statements' have been posted hundreds of thousands of times in the past few days.

When that's the case, you can't do what humans do, which is to conclude, 'Well, all the sources of information I've seen agree, therefore it's true,' because with the sheer volume of sources you 'see' as an AI, that's never the case.

You also can't do the other thing that humans do and say, 'I trust this source and it says so, so it's true.' That requires us to assign a personal bias to the reliability of the source, something that AI models tend not to do by design. This is for a few reasons.

1. Assigning weightings to a source requires a human to judge reliability. There are too many sources, and too much is published every second of every day, for that to be possible.
2. Having a human assign bias weightings INTRODUCES bias into the model. This is self-evident. I don't want some 23-year-old junior software engineer in CA deciding how reliable statements from Fox News or Donald Trump are, and then telling an AI model to weight those sources less strongly than CNN when giving me information. I probably wouldn't consider their bias weightings even remotely accurate, and the model is ultimately going to serve all of humanity...

As such, AI models tend to rely on the number of data points to assess reliability. You can see the issue with that in a situation where, as you say, a lot of people, especially those creating content online, have a very different version of events from reality. If they're creating the content, they provide more data points, so they get a higher weighting, even if they're spewing complete crap.

Finally, an AI is not able to use logic and intuition to say, 'This I know to be true, that I think to be true, this seems unreliable; I'll present only what I know to be fact.' That's a higher-order skill based on fairly complex logic and reasoning, and AI models can't do it, except perhaps by choosing to present only information that reaches x% consensus across all data points, or some minimum number of total data points. That just loops back to the overall weight of data being created, not to whether that data is true (a rough sketch of such a guardrail follows).
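
As a purely hypothetical sketch of that x% consensus guardrail (the threshold, claim strings, and counts are invented for illustration), note that it still measures volume of data points, never truth:

```python
from collections import Counter

def answer_if_consensus(claims, threshold=0.8):
    """Answer only when the top claim holds at least `threshold`
    of all data points; otherwise decline the query."""
    top_claim, votes = Counter(claims).most_common(1)[0]
    if votes / len(claims) >= threshold:
        return top_claim
    return "I can't answer that right now."  # the corporate dodge

# 85% of the data points agree, so the guardrail passes the claim,
# whether or not it happens to be true.
claims = ["the sky is red"] * 8_500 + ["the sky is blue"] * 1_500
print(answer_if_consensus(claims))  # 'the sky is red'
```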

In the end, the AI model is, as you say, a collation of Google/Bing results. You know as well as I do just how much bollocks is contained in those pages. But once the AI has collated it and served it up as 'fact', you lose the context of where the info originally came from, which makes it harder for you, the human end user, to discern fact from fiction. It lends a veneer of authenticity, even if the original source for the info presented was some crazy left-wing nutjob on Reddit whom you'd otherwise dismiss out of hand.
 
That's a valid, and rather deep, question!

I guess it comes down to the basic principles of freedom of speech vs. journalistic integrity as they apply to the internet space. Is it morally or legally acceptable to knowingly propagate falsehoods, in the de facto town square of the 21st century, that may be damaging to people or causes in the real world? A tricky one...

Whatever your personal viewpoint on that is, from a company point of view I totally get why Microsoft might be pretty risk-averse about letting a tool whose output they cannot really predict comment in real time on events, based on what they know to be unreliable data sets, especially while it acts as effectively a public face for their business.

If it does say something really damaging and they let it, then the lawsuits alone would be catastrophic, let alone the reputational damage and loss of consumer confidence. Can you imagine the field day the Chinese government, not to mention every news outlet on the planet, would have if it said 'it was a Chinese-planned assassination attempt'? Or the backlash from over half the US population if it came out with 'It's just a hoax planned by Trump'?

It'd make the Bud Light boycott look trivial by comparison.
 
