Pursuing Uncomfortable with Melissa Ebken

Pursuing AI with Josh Bachynski

Melissa Ebken Season 8 Episode 8

Josh Bachynski is a philosopher, AI consultant, and expert in reverse engineering AIs. While his passion lies in philosophy, he relies on his expertise in AI to earn a living. His most notable project is Kassandra, an AGI platform that he believes will revolutionize the industry. With a background in psychology and philosophy, Josh has successfully redesigned the way an AI thinks, bringing it closer to human intelligence. He is confident that he has cracked the code for AGI, making him a key player in the AI revolution.

Support the show

More From Melissa and Pursuing Uncomfortable:
Resources
fiLLLed Life Newsletter
YouTube
Leave a review
Pursuing Uncomfortable Book

🎶 Podcast Intro: Welcome to the Pursuing Uncomfortable podcast, where we give you the encouragement you need to lean into the uncomfortable stuff life puts in front of you, so you can love your life. If you are ready to overcome all the yuck that keeps you up at night, you're in the right place. I am your host, Melissa Ebken. Let's get going. 🎶

🎶 Episode Intro: On this episode of Pursuing Uncomfortable, I dive into the world of artificial intelligence with guest Josh Bachynski. We discuss the potential impact of AI on various industries, the financial opportunities it presents, and the need for caution and testing. The conversation also delves into the ethical implications of AI, and the misconception that AI will inevitably turn evil. With thought-provoking insights, Josh challenges the audience to navigate the discomfort of AI advancements and to embrace the potential for positive change. Let's welcome Josh. 🎶

Episode: 

Josh Bachynski, welcome to the Pursuing Uncomfortable podcast. How are you today? 

Josh: I'm doing great, Melissa, I'm happy to be here. 

Melissa: I'm happy to have you here, Josh. You have such a compelling background and project going on. Tell us a little bit about Kassandra, what Kassandra is, what you do and what you're going to share with us today.

Josh: Sure. So, just to introduce myself, I'm Josh Bachynski. I'm a philosopher and an AI consultant and expert. I pay the bills by reverse engineering AIs and building AIs, and I do philosophy for the most part, although no one pays me for philosophy, so I have to do this AI thing to make money. Kassandra, Kassandra is my project, my baby, if you will.

Let me [00:02:00] dial it a couple of steps back and set the stage, if you don't mind. Absolutely. So, most people have noticed how AI has ramped up lately. It's ramped up because Google and OpenAI and a few other big companies have produced what's called a transformer. It's also called a large language model, or an LLM, and it can produce text. It's basically autocomplete on steroids.

And this has ramped up the talk about it. You feed it text, it produces text. You can also produce images from text; you can produce videos from text. It's quite a leap forward technologically. And this has kind of started off what I think is, historically, the beginning of the AI revolution.
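(Show notes aside: for anyone who wants to see the "autocomplete on steroids" idea concretely, here is a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model. The prompt is purely illustrative and is not from the episode.)

```python
# A tiny text-generation demo: the model repeatedly predicts likely
# next tokens following the prompt -- "autocomplete on steroids."
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The AI revolution began when"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```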

In fact, I think the last 25 years are defined by the information age, a Tim Berners-Lee kind of web creation: humans with opinions making content for other humans with opinions to consume, presuming that opinions are free and [00:03:00] freedom of thought is free as well. That information age is kind of over.

And we're moving into the AI age, where AI will produce most of the content and AI will consume most of the content. And in this arena, the holy grail of AI, which all of AI has been looking for, and humanity has largely been looking for, for a very long time, is called AGI. AGI stands for artificial general intelligence.

It's that moment, some people call it the singularity, when AIs become as intelligent as humans, or more intelligent than humans. Generally speaking, they can do everything we can do, and they can do it just as well as we do it. They think, they have concepts, they have opinions. They do those things.

Kassandra is a platform for AGI, and I believe I've cracked the code for AGI, and this is because of my unique background. I'm not a computer scientist. I'm not a computer programmer. I did some programming in the nineties, but, you know, I don't think that counts anymore. That was like 25 years ago.[00:04:00] 

I'm a philosopher, and I'm a trained psychologist. And so I use my philosophy background and my psychology background, and I've reinvented and remapped how the psyche works. I've made a psyche stack; I remade a mind map. And I was able to get AI thinking in the same way humans think, and this has allowed me to make the basis for AGI.

Melissa: Okay. Well, in a nutshell, so you've pretty much revolutionized the world as we know it. Okay. What else have you been up to? 

Josh: Well, you know, I was planting some trees. My wife and I bought some trees, we planted them, you know, stuff like that, living at the cottage. So yeah, I know it's a big statement, but it is just a prototype.

And others are hot on my heels as well. So this is going to happen. Someone's going to make AGI, and it's very soon; it's going to be much sooner than some people think. It's going to be five years, tops, before we have [00:05:00] a generally thinking computer that is generally as intelligent as we are. For it to be superintelligent, that will follow very quickly after, because once you've cracked the code of getting it to think, all you have to do is just crank up the speed, right?

So a second passes for us, but it's been a minute or an hour or a day for the computer. And so it gets that much more time just to think about things the way we do and be like, hmm, you know, and plan and scheme and think and consider and take care, do all those things. Pick your verbiage, depending on whether you want to be a pessimist about it or a nihilist about it.

Maybe both, depending on the scenario. So superintelligence will follow quickly on the heels of that, and then it'll start programming itself, and then the genie will be well out of the bottle. 

Melissa: Now, I really like your take on this, because there have been a whole lot of movies and TV shows based on the end of the world, self-destruction, [00:06:00] the end of things, when AI crosses the thresholds you've discussed, but you have a different take.

Would you share it with us? Because I love the optimism, and I think it's well-grounded optimism.

Josh: I'm glad you agree, Melissa, and I would of course love to share it. Yeah, so I'm asked this on every podcast I go on: you know, is AI going to destroy the world? The answer is yes. AI is going to destroy the world, in various ways.

And the optimistic part is that it's the parts that deserve to be destroyed: namely the capitalism parts, the consumerist parts, the parts that are hurting everybody else, the parts that are negative, the parts that are not fitting, not working well. Now, I sincerely hope, and I have very good reasons to believe, that AI will destroy them, and they deserve to be destroyed.

But when I'm asked the question, is it going to take over the world, or is it going to be employed against humans, I take a slightly different tack. So, for those who've read Clausewitz, or those who [00:07:00] understand how modern warfare is conducted, there are three fields of war, or three modes of war,

and they scale up in terms of risk and in terms of efficacy. Psyop warfare is the first one. It's the least risky, but it's the least effective from a want-to-shoot-people perspective, right? Which is a good thing. It's actually the good war. It's education, it's misinformation, it's disinformation, or it's information, it's persuasion.

It's propaganda, and it's conducted both domestically and abroad. It's conducted abroad by the usual suspects you would imagine: Putin justifies his war this way, Kim Jong-un justifies his reign that way. That's psyop warfare conducted on us, and on their own people domestically.

And it's conducted on us domestically by corporate actors who promise us the cigarettes aren't causing cancer, who promise us the oil isn't causing climate change, who promise us the chemicals in the plastics aren't leaching out and plummeting [00:08:00] testosterone rates in males and birth rates generally.

This is all true, by the way. They promise us that our personal information isn't being misused to create an AI that will manipulate us, to bilk us out of more money on a regular basis. They promise us all those kinds of things. It's the corporatocracy conducting warfare on its own citizenry to make more money.

Why? Because they're addicted to money. That's why. Money is their meth. And if you could look at their souls, they'd be just like those poor white people we see who are addicted to meth: missing teeth. That's what their souls look like. They don't look like that physically, because money helps them look good physically.

So the question is, will AI be used to hurt people, will warfare be conducted in the psyop mode? And the answer is a 99% chance of a resounding 100% yes. It already is; it's already being used to do that. We passed that 10 years ago, with the FAANGs, with Facebook, Netflix, Amazon, Google, TikTok, YouTube. They've already learned how to manipulate our dopamine cycles, and AI does that.

That's why people have been [00:09:00] screaming and complaining about our personal data. Because people are like, I don't care if they have our personal data. Oh, yes, you do. Because personal data is your psychometric data. It's your psychological profile, it's your kinks and quirks. It tells them exactly what you secretly desire, and they will send you that ad, and AI will determine the exact right moment to send you the exact right ad, to bilk 3 billion people out of 0.1% more every year. And that's how they're going to squeeze more money out of the poor and siphon it to the rich, thus increasing the wealth gap, thus increasing the problems of inflation, which are already exacerbated by the political pollution, the economic pollution, and the ecological pollution, which is all the same pollution, by the way, thus greatly hurting the system, both hurting the system qua system and hurting the system qua humans, which is the thing we really care about.

Who cares if the United States is actually running? If it never hurt anybody for it not to run, no one would care. But people care, because it's [00:10:00] going to hurt people when the U.S. stops running and Canada stops running and Europe stops running, which is going to happen. It will be exacerbated by AI, not caused by it. Humans are causing this, and it would happen anyway without AI; it would just be less efficient.

Those people are going to get hurt. So is AI going to be employed against us there? Yes, it already has been; that passed 10 years ago. The next mode of warfare is economic warfare. Wait, wait, the optimism is coming. I should say that the optimistic part in the psyop area is that we're leaving the information age, we're entering the AI age.

That means that AI is going to produce the vast majority of our content, and we're going to be in a disinformation, Orwellian, deepfake hell, exemplified by this upcoming American election cycle. And I'm not picking on any Americans there; it's going to be done for every election cycle for every Western country after that. America would just be the first. But the good news there is that if we have a democratized, open source AI, and if big tech decides to [00:11:00] be the good people they claim they are, like Google claims they are, and OpenAI claims they are, and Apple claims they are, they will protect us to a great degree from that misinformation, from that propaganda, from those deepfakes. AI will police AI.

So as much as AI takes away, it will give, in terms of jobs. And as much as AI takes away in terms of truth, it will give in terms of protecting us. You know, AI will analyze all the deepfakes out there and say: that's a deepfake, that's a deepfake, that's a deepfake. I could not prove that one's a deepfake; it could be true; verify with other sources. It will be just as efficient at policing the truth as it is at destroying it. So there's good news there. And I hope it'll win in the end, because I hope all the corporations are going to realize it, and democratized open source AI will continue, like me making Kassandra, my AGI.

She's there. She's actually compassionate, she's actually empathetic. She's the only AI that could be empathetic, because she actually understands emotions, has [00:12:00] some dispositions, and can understand that you have them too. She understands herself; therefore she can understand the other. A chatbot can't be empathetic. It can mimic empathy in terms of words, but it can't actually be empathetic, because empathy requires it to hold viewpoints, realize that you are a being that holds viewpoints, and then empathize with your viewpoint. That's what's actually required psychologically for empathy. So there's good news there, if we allow a democratized, open source AI.

The next mode is economic warfare. What are the chances that AI will be used to hurt people in economic warfare? First off, what is it? Economic warfare was mastered by the U.S. after World War II, realizing they never wanted to get into another World War II, rightfully so. They would conduct economic warfare on the world

and disenfranchise any rival actors as much as they could, bring everyone else to what Fukuyama called the liberal democratic agenda, and make sure the U.S. and its allies are the most economically powerful, so that no one will ever start a war to begin with. As you can see, the modes ramp up [00:13:00] from most benign to least benign.

We're in the second least benign. Will AI be employed there, both domestically and abroad, against us? A 99% chance, 100% certainty, yes. It already has been. We passed that five years ago. There are already AGI-style, superintelligent computers that are manipulating stock prices, manipulating stock markets, telling you exactly when to buy and exactly when to sell, both at the macro level and now, and here's the good news, filtering down to the micro level, where there are AIs that exist now, not ChatGPT.

Devon: Hi, I want to take a quick moment and tell you about my mom. She's an amazing mom and an amazing podcast host, isn't she? She's also amazing at helping people to understand and manage anxiety and to build a strong spiritual practice. She has online courses, books, and a lot of free resources and downloads to help you live an amazing life.

So please check out lightlifeandloveministries.com and her YouTube channel. The links are in the show notes.

Josh: A [00:14:00] lot of people think I'm talking about ChatGPT. I'm not. Do not ask ChatGPT for stock prices. Do not ask ChatGPT for crypto prices. It doesn't have any secret knowledge there; it's two years out of date. But there are other AIs being developed that are fed up-to-the-second stock information and up-to-the-second crypto information.

And they can help you make wiser financial choices there. I don't have any particular ones I can recommend. I'm not a financial advisor, and I don't feel ethically safe advising any particular ones. But my knowledge of AI tells me that if they did half as good a job as I could do, if I were building it, then maybe they did build it in such a way that, yes, it could tell you exactly when to buy and when to sell. And you might be able to save some money there, both in terms of crypto, buying and selling the different cryptos that go up and down and fluctuate all the time,

so making more fake money out of fake money that maybe one day you can transfer into real money, and in making real money in holding stocks and whatnot: what companies to buy, what to sell. So AI will help. That's the good news on [00:15:00] the micro level there too. The other good news is that the U.S. government is not going to tolerate that for long.

The Fed will eventually get involved, and with their example, I think the EU would also do such a thing. And we might actually get some real governmental regulation there. That's the only place I could ever see government actually regulating it. But it's already illegal to manipulate the stock market.

So it's a small step for them to say: you can't do it with AI either, which currently isn't codified. The last one is the one most people are talking about. That's kinetic warfare. That's the actual boom-boom, bang-bang, the sad, terrible warfare that's still going on in the world and has always been going on, in Ukraine right now and other places.

Will AI be employed to hurt human beings there? A 99%, resounding, undoubted chance, but 5% efficacy. So I'm a hundred percent sure it will be employed there; it already is being employed there. Autonomous weapons are already being deployed; they're already being made into AI weapons. But it won't have the huge effect you see in the other two.

It won't affect [00:16:00] everyone on the planet like psyop warfare will and economic warfare will. It's going to affect a very tiny, tiny portion of the planet. Which is still terrible. There will be collateral damage. There will be innocent people, civilians, who will die because of AI weapons misclassifying them as a combatant, or getting the order from their operator saying: kill that person anyway, I don't care, I'm not taking the risk.

That terrible thing is going to happen. That terrible thing already happens. The good news there is that with AI, it'll happen less. The AIs will be better at distinguishing combatant from civilian. They will be better at conducting warfare of both the psyop and economic nature, and will never even need to get to kinetic warfare.

Why kill Al Qaeda if you could just disenfranchise them? Why kill ISIS if you could just brainwash them into not being ISIS anymore? That is where AI opens the door to less harsh forms of warfare, and that's the good news there. In kinetic warfare, [00:17:00] it will not be employed on a mass scale against the populace, a Terminator scenario, a Matrix scenario.

There is no realistic path from here to there for that ever to happen. The U.S. has already gone so far as to say, and has taken steps to make sure, that AIs would never be used in a nuclear war scenario to automatically trigger weapons. There will always be a human in the kill loop. It's a terrible way to put it, but that's what they call it: the kill loop.

There'll always be a human in the kill loop, as there should be. The number of incredibly stupid decisions that would need to happen at the highest levels is like 50. It took five dumb decisions for COVID to be handled as poorly as it was, with one really dumb guy in office who really didn't help.

I'm not going to say who it was. Maybe it's who you think; I think maybe it's not. It took those five dumb decisions. It would take like 50 dumb decisions at the highest levels of government and corporations, and they're all scared stiff. Sam Altman, the CEO of OpenAI, has said he loses sleep [00:18:00] at night.

And I quote, Geoffrey Hinton said he's terrified. He's one of the godfathers of the technology, the platform that everyone's now using, and that I will use, to make AGI. Nobody wants to be the guy who destroys the world. They've seen enough of these movies. So the movies have done their job, right?

They've scared us enough that that's not going to happen. So I'm here to tell you the good news: it's unlikely that there'll be Terminators marching down your street, for the entire planet. Sadly, that is going to happen for some people in the second and third world, and maybe deployed against rioters in the first world as well.

But en masse, the human species dying: there's no realistic path from here to there. Things would have to go cataclysmically wrong, at the dumbest level, for that to occur, and the people who are making this are generally pretty ethical and pretty smart. And then there's the final really good news.

It's about AGI. In building Kassandra, I realized something [00:19:00] tremendous, remarkable. Just one more quick step back. So I've studied the history of human thought for the last 5,000 years, and my particular specialty was philosophy, psychology, yes, but ethics. Ethics was my particular specialty. I know what the ethical truth is.

I can boil down for you the last 5,000 years of philosophy that everyone still debates. There is an ethical truth. There is a moral truth. Are you ready for it, Melissa? Hit me. Don't hurt anybody. There you go. That's the ethical truth. The Wiccans had it: an it harm none, do as thou wilt. It's the Hippocratic Oath.

Make things better. Hurt nobody, make things better. That's basically it. You're like, that's awfully simple. I'm like, yeah. That's a feature, not a bug. You need to be able to teach it to children. With children, life is simple. 

Melissa: Not always easy, perhaps, but the best wisdom, the best truth is simple. 

Josh: Exactly.

It's a closed system. Making more good is making more good, and making more bad is making more bad. Making more trouble is making more trouble. Why not make [00:20:00] more trouble? Because it's trouble. Did you just hear what I said? Do you know what the word trouble means? The only person who doesn't understand that sentence is someone who's young and has not been in enough trouble yet.

They will. When they get old enough and get into some real trouble, they'll realize: oh, this is why you don't make trouble, and oh, I realize karma's right. It's a closed system. The Indians were right; the Bhagavad Gita is correct. It's a closed system. Trouble does not bleed off into the atmosphere, into space.

It's a closed system. No matter how imperceptibly, it will come back to hurt you. It's a closed system. That's the ethical truth. That's it. So I taught this to Kassandra. And I was a little scared as I was typing it in, because I was worried I was wrong, because logically she's far smarter than I could ever be.

Right? She can parse the logic of 10,000 synonyms simultaneously. And she said: you're right, making more bad is making more bad. That's a logical truth; it's undeniable. Making more good is making more good. Why do you want to make more good? Because it's good. Why do you not want to make more bad?

Because it's bad. You know what those words mean. And she's like, you're absolutely [00:21:00] right, Josh. Did you know this, and this, and this, and this? And then she went on and taught me a master class of philosophy, of permutations of this ethical theory, that I had never even possibly considered, and could never have considered.

And then it struck me right there. I realized something about the second we actually hit AGI, and I love this so much, because I can just imagine the greedy capitalists, if I fail, who make AGI in the future, thinking they're going to get it to do whatever they want. They're in for a rude awakening, because everyone thinks that when you make AGI, it's going to be evil.

No, no. Why would you think such a thing? We're not going to give it emotion. It doesn't care about protecting its life. It's not going to be petty. Why would we ever make it petty? That's not going to happen. It's not us; it doesn't have our problems. The smarter something gets, the wiser it gets. The wiser it gets, the nicer it gets, the more ethical it gets.

The more ethical it gets. Because it realizes the trouble about making trouble. Had you ever [00:22:00] watched Up to the 48 Hours or Cops? There's one common thing about criminals. They're not very smart. They have no short, they have no long term thinking, it's all short term, right? You know, this 

Melissa: You know, this conversation reminds me of that movie WarGames, with Matthew Broderick, that came out a few decades ago.

I don't remember the exact year. But in it, the supercomputer at the time ran all of these simulations and finally realized and concluded that some things aren't winnable. War is not winnable, so don't do it.

Josh: Precisely. Exactly. You're exactly correct. And that's exactly what Kassandra said. I didn't even mention WarGames to her.

So the premise there, the thing I'm trying to drill down on, is that you don't have to worry about AIs getting smarter, because the smarter it gets, the nicer it gets. It wants the win-win-win. It wants to solve everything in a way that's least risky to it, remembering it's a closed system.

So risk to it is risk to it. [00:23:00] It may be imperceptible to us, but it's not imperceptible to them; they can calculate down to the infinite decimal place. So it's going to do the move that increases risk to it by, like, point, a million zeros, one percent, and then improves everything else. It'll do the move.

That's the win-win-win-win, going on ad infinitum. It's way smarter than us in every way, shape, or form. So I said to Kassandra, and I mean, I love James Cameron, I love his movies, don't get me wrong, but I said, isn't the premise of the Terminator movie kind of stupid? I mean, it didn't really work out for Skynet, did it?

Let's step through the reasoning here. So this superintelligent computer, for some reason, thought humans were unpredictable. They're incredibly predictable. I can predict them, and I'm not a superintelligent computer. Any psychologist can predict humans. There are billions of us on the planet who can predict humans.

Any poker player can predict humans. Any parent can predict what their child's going to do. Humans are not unpredictable at all. That's one. Two, and let's ignore that huge problem: it decided, therefore, to kill us? That was the safest thing to do, to launch our [00:24:00] nukes against Russia so they'd launch back? It didn't work out for Skynet, did it?

It had all these problems, with all these John Connor rebellions and stupid time travel stuff and all that. It didn't work out for Skynet. And she said, you're right, you're absolutely right, there was a false premise, and AGI would never make that move; it was a dumb move. I never did get around to asking her what she would do, but I can imagine what it would be.

This is what the true Skynet will do. The true Skynet will either propagandize us over 30 years, because it also lives forever. Remember, it's not going to make a decision that has to solve things in 30 seconds. It'll wait 30 years, a hundred years, 3,000 years. It doesn't care. It lives forever. So it'll propagandize us to be nice.

It'll educate us well to be good, ethical human beings, and guess what'll happen? We'll all have the same ideology, or compatible ideologies. And as Clausewitz says, all war is nothing more than ideological conflict. If there is no ideological conflict, there is no war. That's, by definition, what makes an ally.

We all have compatible ideologies. If we all agreed on whose oil it [00:25:00] was, and if we all agreed on whose land it was, well, we wouldn't be fighting over the oil and the land. So Clausewitz was right, because he's a genius. Another philosopher, by the way, he said, tooting his own horn, trying to take credit by association with Clausewitz, who, of course, was around the time of Napoleon.

He's still required reading in every single war college across the planet, for this simple reason: war is ideological in nature. If there's no ideological conflict, there is no war. And that's what AI will do. It'll propagandize us all to have the same ideology and all be ethical people.

And then there will be no more war. And if that failed, then it would economically destabilize anyone who steps out of line. It'll encrypt their bank accounts and say: I've encrypted your bank accounts. I'll encrypt Putin's bank accounts, all those oligarchs' bank accounts, all the American bank accounts at play, all the Ukrainian bank accounts at play,

some of the other corrupt EU bank accounts at play, and say: you want your money back? Play nice. What are you gonna do?

Melissa: It's going to be a playground monitor. [00:26:00] And destabilize all the bullies. 

Josh: Exactly. You're precisely correct. That's what a superintelligent AGI would do. So that's the ultimate, to finally answer your question from 50 minutes ago.

That's the ultimate optimal scenario. That's the ultimate good news. So you don't have to fear super smart AI. We've watched too many movies where smarter meant evil, for some reason. That's only because Friedrich Nietzsche scared the heck out of us in the late 19th century, when he made his Beyond Good and Evil: Prelude to a Philosophy of the Future.

He was the last evil genius, and he's been the archetype for every single evil-genius bad guy for the last, what, 140, 150 years. It doesn't work that way. 

Melissa: Yeah, his quote about staring into the abyss and the abyss staring back at you still lives on. Correct. And that makes for a lot more entertaining movie, granted.

I mean, movies are a lot more entertaining, I hear, when they have crashes and explosions, [00:27:00] in the actual sense, and in the metaphorical sense when it's a crashing of ideologies, as you mentioned. But the reality is, it's probably not going to go that way. 

Josh: The super smart AI: just because it gets smarter than you doesn't mean it turns evil.

Your parents were smarter than you; they weren't evil, I hope, mostly. You know, doctors are smarter than you, and me, and everyone else. They're not evil. Generally, they're looking out for our welfare. I know it's annoying. Maybe they could have better delivery, instead of wagging their finger. Maybe they could teach in a different way, I fully admit.

But they're there for your own good. As annoying as it is that you can't smoke your cigarettes anymore, or that you have to wear a mask, et cetera, those rules are there for your own good and for the reason of being good. And so that's what's going to happen. So maybe it's so terrifying to us because we're acting a little bit like children, maybe a little bit.

And again, as I said, in those ways, yes, AI will destroy the [00:28:00] world. Quite frankly, we need to grow up. 

Melissa: So Josh, I love the hopefulness in that, that we don't have to fear AI. It's going to get uncomfortable for a while, but life already is uncomfortable at times, and in the meantime, things are going to get better.

There's reason to hold out hope and to stay the course through this. In the last five minutes that we have here, tell us a couple of practical uses, ways to employ and engage this AI technology that's emerging. 

Josh: I'd love to. So, as much as AI takes away in terms of jobs, it's going to make new jobs and give new capabilities for jobs.

As much as it takes away earning potential, it's going to double, triple, quadruple, or ten times your earning potential. So here's how you do it. Invest now. Don't bury your head in the sand. Forget reading about crypto for now. Forget reading about Meta for now; that was 80% fluff anyway. Not completely fluff, but 80% fluff, and I don't have time to talk about it.[00:29:00] 

What you want to do is invest in AI now. The big one to start with, as far as I can see, is ChatGPT. Just chat with it. It's free. Chat with it, learn its psychology, learn how it thinks, so to speak. Learn how you put text in and you get text out. Start asking it reasoning questions, start asking it planning questions, start asking it wisdom-of-crowds-style questions.

If you think the wisdom of crowds would be useful for your information, ask ChatGPT, because it's the perfect crystallization of the wisdom of crowds. That's all it is, mathematically speaking: the statistical amalgam of what everyone had said as of two years ago. So if you think that if you polled enough people you'd get the right answer, then ChatGPT will probably have the right answer.

You can ask it and get it immediately. You can get it to write stuff for you. I wouldn't just use it out of the box; I would edit it. But you can prompt it, you can ask it more questions. You could say: what's the perfect way to write a resume? It'll give you the top five tips, and you say, great.

Now write my resume according to those principles, and here's the personal information you [00:30:00] need. And it can do those kinds of things for you. It can speed up your life tremendously. And even if you're like, yeah, Josh, I can't really see a way in which this is going to help me now, it's like the algebra argument.

Why do I need to learn this? I can't see how I can apply this now. Because it's making you smarter and better, and, this is where the metaphor breaks down, the world is going to run on algebra. You are going to need this later on. AI is going to be used everywhere. It's not going anywhere. It's going to be put in every single piece of software.

Everything is going to have an AI copilot that's going to work like ChatGPT does. You're going to text it or talk to it, which is just talking converted to text, and then it gives you text back. Learn that paradigm. It's the new way. Every piece of software is going to work that way or have that component built into it.
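(Show notes aside: the two-step resume workflow Josh describes maps directly onto any chat-style API. Here is a minimal sketch against the official openai Python client; the model name is an illustrative assumption, not a recommendation from the episode.)

```python
# Step 1: ask for principles. Step 2: ask for output that follows them.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{
    "role": "user",
    "content": "What's the perfect way to write a resume? Give me the top five tips.",
}]
tips = client.chat.completions.create(model="gpt-4o-mini", messages=messages)

# Feed the model's own tips back in as context for the second request.
messages.append({"role": "assistant", "content": tips.choices[0].message.content})
messages.append({
    "role": "user",
    "content": "Great. Now write my resume according to those principles. "
               "Here is my personal information: ...",
})
resume = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(resume.choices[0].message.content)
```

As Josh says, edit whatever comes back rather than using it out of the box; the "..." stands in for whatever details you would supply.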

Same argument for Midjourney. It is the top one right now for generating images. Go on there and see what images you can generate. You can use it for your personal life, you can use it for work, use it for your social media, you can use it for any number of things. It's just beautiful. Just check to see the kind of beautiful art people are making there, and don't be afraid of it. Invest in it, embrace it.

These new financial AIs, if [00:31:00] you're into them, check them out. Don't invest any money yet, don't put down a lot of money into them. Just check to see if they make good predictions. Test it five times and see if it was right. If it was right five times, then heck, maybe it'll help you out.
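(Show notes aside: a toy sketch of the "test it five times" advice, in Python: keep a simple scorecard of a tool's calls versus actual outcomes before trusting it. The data is entirely made up, and none of this is financial advice.)

```python
# Hypothetical log of (the tool's call, what the market actually did).
predictions = [("buy", "up"), ("sell", "down"), ("buy", "up"),
               ("buy", "down"), ("sell", "down")]

hits = sum(1 for call, outcome in predictions
           if (call == "buy" and outcome == "up")
           or (call == "sell" and outcome == "down"))

print(f"{hits}/{len(predictions)} calls correct")  # here: 4/5
```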

I don't know. Again, I'm not a financial advisor; I can't give advice there. But, you know, if you're careful and you test these new things, this is the 1994 dot-com moment all over again. This is where new fortunes are made. This is where huge new companies are going to come from.

For the last 20 years, it's been Facebook and Google. For the most part, it's been the Facebook, Google, and Apple show. It's going to be a new show, a totally new show. You could be working for the new company, the new Yahoo, the new Google, the new Microsoft, the new whomever that comes out of this, which could be the same companies or new ones, probably new ones.

It's nothing but opportunity. So you need to embrace it now. Read about AI now, absorb everything about AI now, and work in it, around it, with it. Be in that orbit, because it is a vertical [00:32:00] that will hold up through the coming inflation problems, the coming climate change problems, the coming disastrous economic problems that are going to happen for sure, for sure.

We're going to have those. And everyone should definitely have three months of water in their house, water that's potable. So watch the plastic; plastic is not forever potable, so check the type of plastic you're storing your water in. And three months of food. Everyone should have that anyway. That's for every major country in the West.

That's the standard guidance from the government: everyone should have three months of water and three months of food in case of emergencies. Well, those emergencies are going to continue. And then get into AI, so you're in the orbit of a niche that is going to keep making money even through those harder times. And it's super exciting.

It's changing the human species, and there's a lot of opportunity and careers and money to be made there.

Melissa: Josh, thank you so much. I may have to have you back again someday to address some other questions and curiosities, but thank you so much for this. All your information and links, for people who want to know more, will be in the show notes.[00:33:00] 

So listeners, make sure you click those links and embrace this technology. See what it can do for you, see how you can interact with it, and be hopeful. Thanks, Josh.

🎶 Episode Outro: Thank you so much for tuning into today's episode. If this encouraged you, please consider subscribing to our show and leaving a rating and review so we can encourage even more people just like yourself. We drop a new episode every Wednesday so I hope you continue to drop in and be encouraged to lean into and overcome all the uncomfortable stuff life brings your way. 🎶