AI in Research: Are Humans Eliminated?

Books & The Biz
Dan Paulson and Richard Veltre
Season 2, Episode 11 | Feb 27, 2024
Contact: dan@invisionbusinessdevelopment.com
Episode Summary

Everyone has heard of ChatGPT. It can be a useful tool for research and writing, but is it all it's cracked up to be? In Part 2 of our interview with Mark Stouse, he talks about how AI is still in its early stages and where it falls short.

About Mark: Mark leads a team with the only AI-native platform that enables Go To Market (GTM) teams to plan, predict, prove, and pivot their investments in real time. With over 26 years of experience in marketing communications and strategy, he has a passion for transforming GTM performance with data-driven insights and agile decision making. In addition, Mark has been recognized as an innovator and leader in the analytics field, with multiple awards, patents, and publications. He is committed to advancing the practice and standards of GTM accountability and optimization across industries and markets.

Transcript

[00:00:02.710] - Rich Veltre

AI is... I'll give you my personal perspective. AI is extremely exciting and extremely scary at the exact same time.

 

[00:00:11.150] - Mark Stouse

Yeah, I agree.

 

[00:00:12.750] - Rich Veltre

So I am trying to stay more on the excited side because I do see a lot of benefits. I think, though, for me, I'm not in the science, right? I'm not in the actual programming. I do a little bit more of the accounting systems, and the background books, and the reporting, and that sort of thing, which I'm sure is what's feeding the AI. So my early experience with it was mostly automation, automating the accounts payable, which really, to me, is just learning something repetitive: this is what we did 95% of the time in the past, so we're going to do the same thing. And most of the time, it just takes keystrokes out. So instead of having a 16-person department, I can have a 10-person department, and probably get better results, because the system is doing a lot of the work ahead, and the people aren't doing the grunt work. They're doing the smart work. So I guess where I'm going by throwing that all in there is, where is the line, I guess, for somebody like me who wants to promote it, who wants to push it? And I've got everybody out there saying, here, buy our software because we use AI.

 

[00:01:34.340] - Rich Veltre

That means nothing to me.

 

[00:01:36.250] - Mark Stouse

I think you have to really look at this through some different lenses, right? So one of them is that we are in the most nascent stage. And if you find this exciting, get ready, right? There's a lot more than this coming, and you probably won't like a lot of it. I think also you have to look at it... I mean, you don't hear this discussion much outside of data science circles, but there's a qualitative AI and there's a quantitative AI. Quantitative AI is calculation. That's like proof, right? That is causality and things like this, right? It's machine learning in a sense. It's a bunch of different things like that that are more heavyweight. Then you've got qualitative AI, which is what everybody has experienced with ChatGPT. This is the LLM-based stuff, where what it's forecasting, really, is the next word. So it is totally based on patterns, on helping you crunch large amounts of data, words, into something that can be used, hopefully with accuracy, much faster than any human being could ever do it. So people look at it as, hey, it's time savings. I can be more productive. But then you've got problems with the accuracy of the output.

 

[00:03:50.500] - Mark Stouse

So you're still having to stay really engaged with the output. In my own time, I'm very much a historian, like a real one, and have been for a long time. I study technology in the pre-Renaissance era, and I contrast Northern Italy and Southern Germany. These were two really super hot zones for technology development during that period of time, but they did it very differently. So when ChatGPT first came out, I decided to test it, and I gave it a bunch of prompts to write a concise research paper on something that was fairly esoteric, but there was a lot of stuff out there on the web about it, right? So I wasn't starving the LLM at all, right? And it came back five minutes later with this absolutely gorgeous paper. I mean, you were just looking at it going, crap, right? And then I started reading it, and it had a lot of factual errors. But more troubling, actually, I started looking at the footnotes. There were some footnotes that were new to me, and I'm pretty familiar with the material on this. I'd never seen these before, so I was getting pretty excited, and I was going, Man, I've got to get a hold of this stuff, right?

 

[00:05:45.000] - Mark Stouse

Turned out to be completely fictitious. The AI had literally made up the footnotes. And one of the things we have to acknowledge is that right now, at least, AI is very much an extension of the human mind and the human personality. And so, as we've seen recently with various charges of plagiarism and things like that, people do things they shouldn't do, and that is going to be reflected, unsurprisingly, in the AI. And to wrap this up a little bit, the idea that you can reduce your costs dramatically with AI, which is like the big thing among C-suites right now, is only partially true. The current estimate, which I think is just ludicrous, is that over the next three years, we'll lose 300 million jobs globally to AI, right? Well, I don't think that's possible, but let's just say that it is. Let's just say that that number is real, and that three years from now, we're all sitting here talking to each other, and we're like, holy crap, right? 300 million, that's a lot of work, right?

 

[00:07:24.780] - Mark Stouse

Even with all of the new opportunities that always attend new technology, and all of that is true, it's all real, right? We will have just destroyed the economic base of the world, right? I mean, who is going to have the money to buy the stuff? This is where you're over-optimizing the situation, right? And unfortunately, I think that a lot of C-suites are so incentivized, personally, on the short term, that they look at it as, hey, I can't really control the revenue side of the equation, but I sure as hell can control the cost side, right? And if I drop the cost significantly, and EPS rises through the roof, my personal take-home is going to just go nuts, right? My wife's going to be happy. And by the time the blade starts to swing in the other direction and cut in not-so-pleasant ways, I will be doing something else. And so I think that's a problem here. The last comment I'll make about this is: if you want to understand the power of a technology in history, look at the speed with which it is weaponized, right?

 

[00:09:10.690] - Mark Stouse

And AI, even LLM-based qualitative AI, is already being weaponized in a business sense to an unprecedented degree. Think of the amount of content that's being written for LLM-based buyer apps that illegitimately de-positions competitors. And this stuff will only ever be read by an algorithm. It'll never be seen by a human eyeball, right? So discovering it is going to be a challenge. And so you're going to end up with AI countermeasures, right? To AI bad behavior. And I think that you just have to really acknowledge that a lot of this is going to be in profound conflict with issues of privacy, accuracy, credibility, trust, confidence, all this stuff. If we don't do something like what the Nuclear Regulatory Commission is for nuclear, we're going to have a problem.

 

[00:10:24.400] - Dan Paulson

Yeah, you can already see that coming now, because they can record something in somebody else's voice. They can have imaging that looks identical to somebody else and create videos. Obviously, some of those videos can be whatever it is, pornographic in nature or whatnot, to discredit somebody. And I agree with you. It's going to be a rough period of time where it's the Wild Wild West, and we're taking what's good about the technology and applying it in bad ways. And you led into one of my next questions, which was: when are we all going to be obsolete? When is the human race obsolete?

 

[00:11:05.200] - Mark Stouse

I will just say this, and I realize that this is very much a philosophical position, but I'd like to believe, anyway, that it's rooted in history, and that human nature hasn't changed a whole lot. The world around human nature changes constantly, and that's why things look similar: the human element is constant, but what's going on around it is not. So I think that part of the problem with AI is that it shouldn't stand for artificial intelligence. It should stand for augmented intelligence. Technology exists to serve people, not replace them, not have them serve the tech. So in the end, I think the human being is indispensable. And I fully acknowledge that some of the dystopian view probably has a lot of merit to it, right? Particularly in the short term. We tend to get ourselves into trouble, and then we have to dig our way out of it again. That's another lesson of history. So I agree with that. But the idea that this will turn into the Terminator, Skynet scenario, right? I mean, that's apocalyptic. So number one, I don't tend to...

 

[00:12:55.880] - Mark Stouse

History does not typically, there are exceptions, but typically it does not support the idea of an extreme. Clearly, the Holocaust is an extreme. Things that you never would have dreamed in a million years would ever happen, they happen. But they are the exception. They're not the rule. I'm a big believer, and again, I may be eating these words sooner rather than later, but I believe that the center generally holds, right? That at the end of the day, instability is bad for everybody, and that people will move to stabilize, and that they will move to stabilize in ways that are not overly injurious to individual real human beings. This is one of the reasons why there are hardly any real monarchies left. Let's just remember this for a second. 130 years ago, monarchies and autocracies were the number one dominant form of government in the world, and republics were very infrequent, right? And today we're inverted on that, right? Like, radically. So I have hope, right? But the part that's the wild card is that human beings can be remarkably unable to anticipate the future in the near term. No one can in the long term, but even in the near term.

 

[00:14:57.260] - Mark Stouse

And this is one of the great values of regression and AI, right? It takes the human brain, which, unaided, can't handle more than about three or four variables, and it gives it the ability to process far more than that very, very fast. This is what the Air Force and many other entities now call OODA, the OODA loop, right? The OODA loop is a decision-making framework: it's the speed with which you observe, orient, decide, and act. That's the reason why, in Top Gun, they talk about speed is life. The speed with which you can move through OODA determines your mission success and your survivability. And VUCA, which is all about volatility, uncertainty, all this stuff, describes the environment. That's also a combat term, a military term. And so you need to be OODA within the VUCA, right? And that is where we are. I mean, you want to talk about something that describes the current state of affairs? That's it.

 

All content © Books & The Biz.