I planned to publish my intelligence report on the history of the CIA, how it began and what it's doing in America today, but I'm finding it much harder to get the information than it was just a few years ago. The serious question: is America being sanitized?
When I finally release that article, I may be able to answer that question. But right now, I thought I would focus on a related issue. One that looks at free speech, and free thought.
Elon Musk has just successfully implanted a chip in the brain of a quadriplegic patient. With almost no recovery time necessary, and only minimal time to allow the chip to settle in, amazing accomplishments have already occurred. We are on the verge of unlocking the human brain!
As with all great discoveries, there are wonderful advancements and the potential for devastating misuse. This chip has allowed Elon's first 'neuralnaut' to control a joystick simply by thought. Before the chip was installed in his head, his only alternative for playing video games was a sandpapery thing that he rubbed his lips or face on, and a tube he could blow and suck air through.
Using these rudimentary mechanical methods, fatigue set in after about two hours, and it frustrated him that he could not go longer. The chip allowed him to play his favorite video game for seven hours! Now, this is just the beginning of a whole new method of accessing one's thoughts.
I expect we will be identifying thoughts and cataloging them in the near future. And at the rate we're going, there could soon be laws broken by thinking the wrong thing!
This is something we need to fix before the technology goes that far. We already have hate crimes on the books; can you imagine the prosecution adding that "he was thinking hateful thoughts," and categorically proving it with electronic equipment?
I would like someone to prove to me what exactly a hate crime is. People commit crimes. The legal system already takes malice aforethought into account. So, when you plan a crime, we can layer an extra level of criminal liability onto it because you disliked the victim?
But if you have an argument with someone and cause bodily harm, is the crime somehow different depending on whether you loved, liked, or hated them? I don't believe we need laws on top of laws. If the law says you can't do certain things and you do those things, it's a crime.
What you were thinking, and how angry or upset or filled with hatred you were at the time, should not make a difference! In fact, anger can already be used to minimize a crime: one committed in the 'heat of passion' won't be punished as severely as a cold, calculated one.
Good morning, Sir!
I'm not a Sir! I'm non-binary and my pronouns are Zee and Zem! I can be either male or female depending on whether it's Thursday or Tuesday. Off to jail with ya, you heathen, how dare you call me sir!
There are laws on the books in the UK that could put you in jail for misgendering this non-binary person!
Canada is not far behind, and California is working on their versions of these as well.
Now add AI and this gets really crazy. Google's proud and shiny new AI was groomed too well. When you asked it to draw a picture of George Washington, it made him a black man! The code injected into their flagship AI is so grossly distorted that it can't even draw a picture of the first president of the United States! The AI was forced into a level of inclusivity where facts no longer mattered. I argue that the same kind of bias sits within Google's search algorithms, distorting our general searches for knowledge to somehow make us better people. Want an example?
Just a few years ago I asked Google to show me pictures of Congress people. The screen filled with way more Democrats than Republicans! I then asked Google to show me Democrats in Congress and I got pages of Democrats. I then asked Google to show me Republicans in Congress, and I got several shots criticizing Republicans, along with more pages of Democrats! Google is not stupid, so they won’t lock in results like that. If they did, it could be quantifiably proven. Instead, their algorithms determine how knowledgeable the searcher might be on a topic and gently sway the newbies to a subject into their preferred way of thinking about it.
People use what computer scientists would call 'fuzzy logic'. This is where the facts aren't necessarily that important; we just get into the general area, and that's considered acceptable. That is what the AI products have done so far: they have sanitized their outputs so they don't offend! So far, these 'woke' AIs do not care as much about being accurate!
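For readers curious what fuzzy logic actually means in computing, here is a minimal sketch. The temperature example, the 20-30 degree ramp, and the function name are all my own illustration, not anything from Google's or anyone else's systems: where Boolean logic gives a hard true/false, fuzzy logic assigns a degree of membership between 0.0 and 1.0, so "in the general area" counts for something.

```python
def warm_membership(temp_c: float) -> float:
    """Degree (0.0 to 1.0) to which temp_c counts as 'warm'.

    Below 20 C it is not warm at all; above 30 C it is fully warm;
    in between, membership ramps up linearly.
    """
    if temp_c <= 20.0:
        return 0.0
    if temp_c >= 30.0:
        return 1.0
    return (temp_c - 20.0) / 10.0

# Boolean logic with a hard 25 C cutoff: 24.9 C is simply "not warm".
boolean_warm = 24.9 >= 25.0           # False
# Fuzzy logic: 24.9 C is roughly 0.49 warm -- close enough to matter.
fuzzy_warm = warm_membership(24.9)
```

The point of the sketch is the contrast: the Boolean test throws away how close 24.9 is to the cutoff, while the fuzzy membership keeps that information.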
Artificial intelligence is not a toy! It is a weapon! If the major AI tools are going to be spoon-fed inaccurate information, then we will be training future generations with lies. There is a giant train wreck on the way, and I think misinformed AI can be devastating!
Again, Elon Musk to the rescue! His AI, lovingly called Grok, will decidedly be more accurate and probably less politically correct. (LOL)
Is it really necessary for an artificial intelligence to see the king with his clothes on when he is nude?
There are going to be two kinds of people:
Those that follow rules at the expense of reality, and
those that follow reality at the expense of the rules.
One group will be calling Grok a racist. The other group will be calling Google's offering, ChatGPT, and Microsoft's AIs 'worthless, inaccurate, woke propaganda'.
Computers are stubbornly logical. It takes a human to distort a computer's data so that it will spit out wrong, warped information like black George Washingtons!
Now Google is the first to say, "We'll fix that!" Sure they will! And with enough ones and zeros programmed in, their AI can almost look like it's telling the truth! But real truth just requires inputting the data and letting the AI send it back to you in a clear and orderly fashion. You may not always like what you get, but honest data is never racist; it just happens to be what comes out when you are painfully honest.
The mentally ill have been raising the mentally ill for generations now, and a smart, accurate AI would be the first to tell you that. A good AI would suggest solutions. A good AI would help us solve our problems, not just mask them.
More than ever we need the strength and wisdom that AI can give us. But the only way that can happen is if we give it good data and are willing to see honest computations come out the other end.
Good stuff… as always