Ep 2 - The Potential of AI Language Models: A Discussion with Jamison Rotz, Founder of Nearly Human AI

Cold Takes Popular Opinions

Trevor Robinson / Ira Sharp / Jamison Rotz
Launched: Apr 28, 2023
Season: 1 Episode: 2

Episode Summary

In this episode, Trevor Robinson and Ira Sharp sit down with Jamison Rotz, the founder of Nearly Human AI, to discuss the current state of natural language processing and the potential of AI in daily life. Jamison talks about the value of AI language models in ideation and content creation but also touches on concerns about the authenticity of the content and the need for proof of life in the age of AI.

The conversation also covers the rise of large language models and how they are changing the machine-learning landscape. Jamison suggests that we should expect more competition in the space with smaller models rather than a fight between big players like OpenAI and Google.

The speakers emphasize the need for responsible AI and discuss the potential risks and benefits of AI, including the importance of investing in workforce skills and domain knowledge. They also delve into the future of prompt engineering and its impact on the information age, including the potential for prompt engineering to revolutionize the way businesses operate and the resulting changes in workforce composition.

The conversation concludes with a discussion of AI and data ethics, including proprietary data, good and bad data, and the ethics surrounding them. Jamison provides information on how to contact Nearly Human AI and himself, making this episode a must-listen for anyone interested in the potential of AI language models and their impact on the future of business.

Resourceful Links:

Connect with Trevor Robinson on LinkedIn
Connect with Ira Sharp on LinkedIn
Connect with Jamison Rotz on LinkedIn
Learn more about NearlyHuman.ai


 



All right, welcome back to the Cold Takes Popular Opinion Podcast. I'm your host, Trevor Robinson, with my co-host here, Ira Sharp. And today we have a very special guest, Jamison Rotz from Nearly Human AI.

Welcome, Jamison. Hey, thanks for having me, guys. Yeah, this is a special episode for us because you are the AI man, or at least in our circle, you are the AI expert.

So do you want to give the audience a really brief rundown of what Nearly Human AI is and a quick synopsis of your background? And then we're just going to dive into some fun AI burning questions that people want to know about. Yeah, sure thing.

I've been in technology my entire career, started off as a software dev and founded this company 12 years ago. It was a consultancy to start, but we quickly ended up moving into the data science and machine learning space and specifically natural language processing. So it's been a really interesting run.

We started out when there was like next to nothing available in the space. And now here we are in the new large language model era. And so it's been a really great experience.

We've seen a lot along the way and it's an exciting time. About two years ago, we started moving from just a services company into a product company when we landed funding for our current product, Bell Cortex. It's a machine learning platform that allows companies to take either models that they have us create or models that they create themselves, move them out into a production deployment, and connect them to their business systems.

And it takes away basically, you know, the million-dollar lift worth of DevOps that it typically takes to get your machine learning technology fielded and connected with your business systems. So that's the space that we're in right now. And with all the natural language background, we're really impacted by everything that's happened in the past six months since the launch of ChatGPT.

I can only imagine, you know, how much it's just drawn attention to what you do and everything like that. So, you know, before, where you were probably trying to explain for days what this stuff is, you probably still need to explain for days what it is, but at least people have heard of ChatGPT and kind of want to know more. Well, the questions are completely different.

It's like, you know, what, it used to be like, what do you mean by natural language processing? Right. You know, and it's like, how do these like things roll off of people's tongues and they're like, what's going on?

What, how's this, what's happening? Right. And I remember, I remember when I, when Jamison and you and I first connected, that was one of the first things I said is like, your business probably exploded overnight.

You've got that, you know, before you're in that era of marketing, there's probably a lot of concern or questions around this space. And it just seems like in the last couple of months, everything's accelerated. Almost everybody is aware of, of at least the concept now.

So it really probably propelled that conversation forward for you. It absolutely has. And like I said, the questions now are a lot more interesting because, you know, OpenAI allowed people to just get in there and touch it.

Right. So now people have actually been able to interact with some of this technology.

And so the thought cycles on this completely changed, because now you see it, you feel it, you have some level of understanding firsthand. And then people, especially from a business-to-business perspective, are like, wow, how do I use this now? Yeah.

And we'll dive into a lot of questions that we have around this. I just want to know off the bat. I mean, if I were in your shoes with AI and how fast things are evolving, it seems like every week there's a new tool.

And then, you know, in the next week that previous new tool is now extinct because everyone's building on top and developing so fast. So given the fact that you're building in this space, and you're tackling and solving a big problem, does this give you anxiety in general at the speed of development or anything of that nature? You know, it's funny.

I think that like maybe six months ago, a little bit more so than now, even though it's moving faster now than it was six months ago, we are sort of in a fortunate space where we're building platforms that enable all kinds of models, including the large language models that are coming out now. From our business perspective, like we're actually able to like move with the latest technology. So I'm not building something that is in danger of like immediately being replaced by exactly what you said.

I mean, it's been, it's been really exciting because I've never been in a space like this before where it's like, this is almost like a business strategy all of a sudden where it's like, you, you just, you see where we are and you imagine where we're going next. And like the next week, there's a headline about how that technology now exists. Right.

And so it's going so incredibly fast. And I really thought that when ChatGPT first came out, I really thought that we were going to see a ramp and a plateau, right? Because that's typically how this works.

If you think about, you know, automated vehicles, for example, right? Like, 10 years ago, the Tesla S took us to like 80% of where we are now.

And now we're 10 years on still trying to solve that final 20%, because these are really complex problems, or safety considerations, the whole thing. But what's different now is that, especially in this natural language space, and we can talk about some of the special sauce that OpenAI has really cooked up that's launched this forward, but in this space, when GPT-4 hit, I had this revelation that this one's not going to slow down quite the way that we saw before. And the reason why I came to that conclusion is because GPT-4 leveled up so dramatically from the GPT-3.5 that was running behind ChatGPT.

It was astounding. And I don't have the inside knowledge of OpenAI to know exactly what vintage of model ChatGPT was running versus GPT-4, to know exactly what that timeline looks like. But it was astounding.

It surprised me, the level up that they took. And what's going on with GPT-4 now, as they get ready for GPT-5, is not only do we sort of expect a similar leveling up, but now we actually have the LLMs writing algorithms that are going to fuel the next generation of LLMs. And so I think that this continues on a pretty steep path going forward. But that being said, I think what quells my anxiety is that we're seeing a lot of first use, and the media is really fast on this cycle right now.

So there is some value to just taking a breath, holding back, and watching what happens a bit, so that we can understand where the real business value is. And that's where we at Nearly Human are at: focused on where the rubber is going to meet the road, right? So a lot of this stuff is cool and interesting, but that last 20% needs to be solved before businesses can start to rely on it to actually change their business models.

Yeah. What I find fascinating is that you do have a lot of tinkerers out there that are kind of leveraging this, but the fact of how far, like you said, the advancement came with GPT-4, and the fact that I don't even know what percent of the API tokens have been deployed yet, because I know they have a really long beta list and I've been on it since day one. I know some people have gotten it.

And so it's scary to me to think that it's probably not even 10% yet have been released. Right. Think of the evolution once that can get in the hands of, of many others.

I think we're going to see that, but to your point, I think you're building something on the roots, the backbone, of a really long-term sustainable use case, versus some of these other innovations that are popping up: some are solving problems, some are just really interesting. Where I'm fascinated, and what I talked about in previous episodes, is just the possibilities, right? So I'm going to put you on the spot, Jamison.

So what do you think is one of the simplest benefits that AI in general can bring people on a daily basis? What's the most exciting thing for you? And then what's something that you think people aren't thinking about that could be a bigger use case, you know, in the future, if that makes sense. So on a day-to-day basis, I think the most exciting thing for me, and this is kind of a way that I use it personally.

I don't know that I see people explicitly talking about it in this sense, but what I do love about it as it sits today, from an ideation perspective, is that it acts as a reinvent-the-wheel preventer, right? Because every time you come up with an idea, you can go to ChatGPT and in 30 seconds understand what the consensus of the internet is around that idea, right? And you can even tell it, short bullet points, right?

So the TLDR even goes away. And so what that does is provide this really interesting baseline knowledge for everything that you're working on. And I love that because, you know, I think every innovator falls into this trap of wasting cycles on solved problems just because they're new to them, right?

And like washing that out right out of the gate and leveling up your thought process on the problem that you're trying to solve to like, okay, now I understand sort of like consensus. There's all of these bits and pieces that were actually out there that I wasn't aware of. Now I am.

And then I can take that to the next step and say, okay, well, like what thought cycles are actually novel in this and where can we launch off from here? And I think that I think that's a valuable way to look at it. I do think that the content creation side of things that people are like actively using it for now is a little like underwhelming when you look at it long-term because I feel like that's just essentially going to like whitewash the internet with pontification.

So right now what's running rampant is you can see comments on LinkedIn are just AI generated. And the big problem is they're too long. There's too much stuff and it's just not natural in how a human does it.

So I think, you know, this is a little thing I'm noticing. And it's very same-ish, right? So it's like as if your feed wasn't mind numbing enough, overwhelming enough to begin with.

Right? Yeah. And the way people are, I'll give you a little inside baseball here of what the smart people are doing on LinkedIn to really stand out is, you know, in the current state of the algorithm is really living in those comments, right?

That's something that's picking up a lot of steam. However, to stand out now, a strategy that a few people are doing effectively, and they'll catch waves, is actually recording Loom videos as responses to comments, so people can see your face, they can see you, and then you're not lost in: was this AI generated? Was this not?

Because I think a lot of people are just generating with AI and posting it out there. And like you said, I think it's just going to saturate the internet with garbage content, and originality, getting your face and voice out there, is going to stand out a lot more. And you put your finger right on the spot of something that I think is a 100% bet going forward when I look at this.

And that is that as all of this happens, right, human authenticity is going to become more and more valuable as it becomes less and less of a percentage of what's out there, right? And so right now, those Loom videos are generally more authentic because it's proof of life, right? Exactly. But that's not going to last long, right?

Because now, if you go out with a 30-, 40-second recording of your voice and a picture, you can deepfake a video of you saying whatever you've told ChatGPT to say, right? Because that technology is pretty good. If you look at some of these things, it's pretty compelling already.

So once we lose the proof of life on video takes, then that problem becomes more interesting. And I think one of the things that we have to solve going forward is this proof of life issue. Like, and I think about that in terms of like, well, what do we do?

What do we implement? That's like unscalable in nature and like connects directly to a person so that you do have an understanding of this. And you know, this is a very deep psychological need.

I think that we have as humans, that regardless of what happens in AI, we're not going to evolve away from. And so I think that as we think about business models going forward, the actual person-to-person interaction, like what we're having right here, is something that will remain valuable no matter what happens. Yeah.

Mine's AI generated, so I don't know what you're talking about. So, you know, on the content creation side, I agree. I think it'll become, I believe it'll become evident who's doing what.

I mean, it's already kind of out there and, and it's going to become more and more. It is interesting because the whole idea is that this AI becomes, you know, you, it becomes, you know, your bot, you know, and, and so I dream of the day that I can take the mountains of notes and, and flow them into a learning model. And then I can just query for stuff that I've learned over the past 18 years or 20 years and be able to get the responses that I forgot.

It's going to be awesome. But I am curious on this whole topic because I know in some other countries, even they're starting to like either outlaw it or try to figure out how to govern it. And, you know, I don't know how I feel about that.

I don't feel good because, you know, when you look at some of these language models, obviously OpenAI, for example, has taken in tons of information. You know, it's a big reason they're making it an open platform, because they can collect information from everybody. And that's a huge security risk.

That's huge IP risk. But like the answer can't just be turn it off or over govern it because there's going to be people that advance it regardless of what anybody does. So it's an interesting spot because I feel like we've opened this box and I don't know how I feel about it.

Like, should you close it and put it back in the closet and forget that it's there? I don't think you can do that. But like, should you evolve it?

Because then it'll become something maybe you didn't expect it to be because things are advancing so fast. And yes, people are using it for content creation and that's great, but people are also doing it to develop some sort of script to try to spoof IP addresses of the biggest state actors that there are to cause national damage. And it's a bit of the early days of the internet right now with this whole AI stuff.

It's kind of crazy. Even on that, Ira, really quick is that you got these situations, like, I don't know if you guys have seen, I think it was Drake and The Weeknd. Drake and The Weeknd, somebody created an AI song with both of them on it and it went viral on every social media platform.

And it sounds super realistic. And it's like, what do you do if you are those artists? How do you stop this along with the Joe Rogan AI experience?

If you guys saw that, that was crazy. Yeah, there's all these layers. So what are your thoughts on this, James?

Well, first, Jamison, before you go, I hadn't heard that. So can you sing it for us, Trevor? Yeah, let me know.

No, no, no. No, so in the past two weeks, I've seen some really promising things happen, and I think I can see my way through it now. And this is new for me, so it's exciting.

I have waffled back and forth between extreme optimism and extreme pessimism. And now I'm optimistic again, which is my nature, full disclaimer. So, you know, you're not.

Is that in the context of a day or is that like? Yeah, for perspective, that's probably in the context of like the past two months. When it first came out, I was like, oh, this is this is great.

This is really, you know, and then it got really dark for me because I started seeing like some of what was going on. But here's the thing, and I think that like to your point, like this whole talk of like, oh, let's slow down and wait for regulation is like first off insane. When has that ever worked?

Right. So like doing the same thing and expecting different outcomes. Right.

And secondly, to your point, even if certain organizations or certain countries decided to take that approach, you would never get worldwide consensus on this. And if your most responsible entities are the ones slowing down the most, then that automatically means that your least responsible entities are the ones that are going like mad. And that's not a good balance either.

So what's good about this, OK, is in the past month or so, right, it's become very clear that what these large language models do is very mundane.

Right. Like they're just predicting like the next thing to say. And it's based on this massive amount of data that they've trained on.

And there's a lot of cost associated with it, which is why it took OpenAI, you know, billions of dollars' worth of Azure credits to train this thing. But what gets interesting is what OpenAI did, what they figured out.

I mean, this large language model thing has been going on in research for, you know, 10 years for sure. We lacked the processing power to really get after it in a meaningful way until maybe the past five years or so. In theory, this has been around since, I forget.

I mean, some of these early theoretical large language models are from back in the 40s, right? But what OpenAI was able to crack in a very interesting way was how you tune these predictors over all of this language in a way that gets it to act creepily human.

Right. Even though what it's doing at the root of it is really mundane. So they solved that problem and created the art of the possible when they launched ChatGPT and got this in the minds of the world.
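As a toy illustration of that "just predicting the next thing to say" idea, here is a minimal next-word predictor built from bigram counts. This is a sketch for intuition only; GPT-class models use learned neural networks over subword tokens, not lookup tables, but the loop of "given context, emit the most likely continuation" is the same shape:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """For each word, count which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model, word):
    """Return the most frequently observed continuation, or None."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# Tiny illustrative corpus
model = train_bigram("the cat sat on the mat and the cat ate")
print(predict_next(model, "the"))  # "cat" (seen twice, vs "mat" once)
print(predict_next(model, "sat"))  # "on"
```

Scaled up to the whole internet's text, with a far richer notion of context, that mundane prediction step is what starts to feel creepily human.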

Right. But since then, and this is an absolute theme in machine learning, right? When you get cute with machine learning, it always allows you to kind of skip steps.

And the way that you skip steps is finding the signal in the noise, right? So you see something like LLaMA come out from Facebook, and all of a sudden there's this very small large language model that you can train effectively and cheaply.

And then the idea of like the reinforcement training that comes into play with these things. And all of a sudden you start seeing more and more large language models come into the space. And so I think the way that we should handle this going forward is that we expect instead of like a fight to the death between open AI and Google, right?

You know, who's going to take over the world. They're going to move in the AGI, artificial general intelligence, space, based on a lot of information and a very broad scope of what they're trying to attack. But the fact that we've got these smaller models, and this is an area that Nearly Human is leaning hard into, is that they're going to hit a wall and they're going to be up against smaller models.

The training mechanisms that we use for these models are going to become orders of magnitude more efficient. There are already approaches that people are starting to take to say, okay, well, if OpenAI is training this way, how do we tweak that and reduce our costs by 10 or a hundred times? And then it's actually working with maybe not the exact same results, but 80% of the results, which is by far good enough for a lot of applications.
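To put rough numbers on that gap, a back-of-envelope sketch: the parameter counts below are illustrative (roughly GPT-3-scale versus a LLaMA-7B-scale model, not figures from the episode), and the only claim is the arithmetic of storing the weights at 16-bit precision:

```python
def fp16_weight_memory_gb(n_params):
    """Memory to hold the raw weights at 2 bytes per parameter."""
    return n_params * 2 / 1e9

big = fp16_weight_memory_gb(175e9)  # ~GPT-3-scale
small = fp16_weight_memory_gb(7e9)  # ~LLaMA-7B-scale
print(f"{big:.0f} GB vs {small:.0f} GB ({big / small:.0f}x smaller footprint)")
```

That raw-footprint difference is before any of the training-efficiency tricks mentioned above; it just shows why a narrower model can be within reach of far more teams.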

So I think if we take this approach where we like give the court systems in our countries, like the, the power and the authority to sort of like govern these human acting models, the same way we govern humans. So that if one of them like breaks a law, you know, the people involved are able to be prosecuted. It gives some interesting legal teeth to the actual deployment of these things.

And then we allow it to continue, but we support all of these different large language models coming in. And then now you start to have competition in the space. And that does two things.

One, it creates this interesting nuance for the big players, the AGI players, in that, well, look at what Bloomberg did, right? They took all their financial data and they trained a private LLM. This one has much higher quality data in a narrower scope.

And if you think about that in terms of where this goes, you have different companies in different spaces that have access to this data corpus that they've built up over years. They're going to close that off, right? You're already seeing, you know, Quora and Reddit closing down their APIs to keep large language models from just sucking all this information out.

And that's in preparation for some of the, their own things that they're going to start doing. So if we get more of these LLMs out there in place competing with one another, then you have options. So if one of them does something that we don't agree with on a legal or ethical basis, like you have a court shut them down and while they're under like investigation, and that gives their competitors the ability to come into that space.

And that's real capital market motivation for companies to become more careful. So I'm really hopeful about like where that ends up going. But I mean, do you really think it's possible?

I mean, I agree, you know, there's extreme value in getting these small language models and these kinds of things, but if they're exposed at all, which if you think they're not exposed, they probably are, just because, you know, there is no such thing as an air-gapped system. Don't you think that there could be something built to interconnect them, you know, to learn from each other? I mean, you're already seeing that with, what is it, AutoGPT or something like this, where, you know, its whole idea is to destroy the world or something.

That was ChaosGPT. ChaosGPT, and we've got AutoGPT.

Yeah, yeah, yeah. But I mean, despite what it is, it's all moving so fast. And then if you look at governing things, I just see it being very, very difficult.

And I think we're all saying the same thing, but it has been done with like medicine, right? We've done it with like, you know, peace treaties with nuclear bombs and these kind of things. I don't know.

I look at this and I go, is it to that level? I tend to think that it kind of is. It can be right.

And I've just, everybody that I've talked to, I encourage, especially the, there's a ton of responsible AI chatter out there. Right. And that needs to become more than chatter.

And I've just been encouraging folks that are working in that advocacy space to bring the lawsuits. It's the only way to force the issue. Just sue.

Everybody's got to start getting this into the court system and forcing the courts to make opinions on this. That's going to be a long process, but not as long as waiting on regulators. Is that a US-centric thing, though?

Like, does that matter if you're another country? It's going to be the ones, right, like the sort of democratic spaces that have these court systems that can absorb this kind of stuff and get to work on it. If you look at autocratic regimes and stuff, well, look at China, right.

And China is already starting to wrap their arms around it in a pretty substantial way. But the other thing is, from an information perspective, China is pretty isolated from the rest of the world because they're not letting stuff in. But if you could have pipes, I'm sure, I mean, they know what's going on on Facebook. You may not be able to look at Facebook there, but they know what's going on on Facebook.

A hundred percent. And that's where it gets a little funny. And I do know the good part about this, from a worldwide perspective: because of the way China has operated, we can see the data going into and coming out of China's state firewall, essentially.

So that can be monitored, and it may come to that. I mean, you know, this is a whole other geopolitical thing, right? Because as tensions rise between China and Russia and the rest of the world, cyber warfare and information warfare are absolutely going to be a thing in the future.

And that's something that's going to develop alongside of this. And the US and EU and other free nations are going to have to sort of treaty up around this, I think. Do you feel that the government in general is probably going to have to hire, I mean, to gain the competence around this?

I can't imagine that the current government and state has the competence of the experts that actually truly understand, you know, the impacts of this technology. Like, do you think that that's going to be a big, a big wave in the future here of them having to level up? They're already, they're already hiring.

And quite frankly, there's two things that are happening in their favor right now. One is the massive Silicon Valley layoffs. Yes.

And that coupled with the fact that like venture capital is all sideways right now. So like company formation in that area has just like slowed down by orders of magnitude. Right.

So there's that going on, which is freeing up talent. And the other thing is just the social backlash around sort of the frothy, fraudulent VC stuff that's making headlines left and right, which has created a generation of technologists that want to do some good in the world. And so I think, you know, between the government leveling up on their hiring and offering some more competitive salaries in the space.

And I've heard stories, you know, there was a, there was a podcast from the head of the FTC that was stating that like, they have been hiring top tier technologists at a rate that he's never seen. And they don't care about the money. They're like, I feel passionate about like making the world a better place and thwarting some of the potential risks around technology.

And I'm here for it. Right. But does it even matter?

I mean, seriously, because you've got to think: top-tier technologists, you're hiring them into the government, you're trying to regulate all this, but you can learn this stuff out of the cliché parents' basement, and the best Silicon Valley tech companies started out of various garages, right? So why couldn't you be in the middle of Russia in a basement, learning all this stuff to figure out how to be the bad actor?

I think it's there, right. It's absolutely there.

And it is absolutely there, and that is the biggest risk that we face. It's the AutoGPT thing, right?

Auto GPT is, is an amazing concept. There's not sort of the, this kind of like creeps into your world, Ira. There's not sort of the interconnectivity between all of these services to allow that just like run rampant and like spiral at scale yet, but it's coming fast.

Right. And so those are the things where again, right. Like the, these, the, the free market governments and court systems like need to be able to like get after this stuff as they see it happening and, and call attention to it because we, you know, one of two things is going to happen.

Right. Like, and, and this is like, this is the, the weird thing about it, right. Is it like, we don't want to slow down because that seems counterproductive, but if we don't proceed with caution, there is going to be some event that causes it all to stop because it's just too bad.

Right. I think that's what we're hearing from a lot of these vocal guys. I mean, lover, Haiti, Elon Musk, you know, he kinda, you know, he's been out there pushing that narrative, you know, whether there's a hidden agenda behind that or not.

I'm all on my fields on this one. Everybody slid out in six months. Why launch three more companies?

Yeah. Point is valid. I want to shift gears here.

I have two questions that Ira and I were debating in some past episodes, and we'd love to get your opinion on them. So the first one: when you look at, for lack of better words, the impact and the benefit that companies can have from AI in general, right?

Let's not even go ChatGPT or whatever, just AI in general. What do you feel right now is the window of opportunity that companies have to get their act together and start investing in this? Whether that's bringing on new people to leverage something as simple as ChatGPT, or hiring a company like yours to come in and implement it, and so forth.

What do you realistically think that window of opportunity is before they just get left in the dust, fighting from behind, and they lose the competitive advantage, right? The first-mover advantage. What do you think that window is? Well, I think it really depends.

So, to your point earlier, if your strategy is to simply implement the latest technology, by the time you get it in place it's not going to be the latest technology anymore, and that value goes downstream immediately. Right. So I think that's a bit of a fool's errand.

I think there are two things I recommend companies really look at. One is leveling up your workforce in terms of their skills and exposure around this, right? The investment in exposure to this technology won't go away, and you do risk getting left behind very quickly if you don't spend some time staying up to date with what's happening, because it's going to continue to accelerate. That being said, don't freak out, right?

Because if you don't have the exact right strategy, or you feel like, oh, I'm already behind the eight ball, you're not, because something's going to come out next week or next month that negates 75% of what you thought you needed to know in November of last year. Right. So that introduction and awareness of this technology within your workforce in general is a really good thing to invest in.

The other thing that really defines that window of opportunity gets back to what we touched on a little bit before, and that's your domain knowledge in your business space: the corporate knowledge and the data around it that you have collected. The size and quality of that corpus is going to be directly related to the length of your runway in terms of corporate advantage, because that's the thing your competitors cannot recreate simply or easily. So it's going to come down to who has the best information and the most forward-thinking approach to leveraging it within their company.

Yeah, and it's very relatable to my work with a lot of companies and marketing teams. There are a million options they could pursue, and the first thing I always hear is: we need to launch a podcast. We need to run ads.

We need to do... and it's, slow down, right? Just because everyone else is doing it? There's always going to be a new flavor of the week, right?

We have to understand, and this gets hard even in this sense sometimes for them: what is your goal? Let's look at the resources you have, the assets, and where there are gaps, because the strategy could change.

And to be honest, in most cases the best strategies for most of these companies are just the most basic ones. It's getting the foundation in place before hopping onto that shiny object, so to speak. And it sounds like there are very similar parallels here, where they have to bring in somebody, or have that domain expertise in-house, who can guide them to say: we know when to sit and watch, and we know when to dip our toe.

And then, with full confidence, we know when to lean in, and even when to pivot, kind of like those safety valves, so to speak. Does that sound accurate? Yeah.

And I think the other thing companies need to reshape their thinking on, and really challenge their mindset around, is this. In the tech industry there's the idea of a 10x developer, right? These are the cream-of-the-crop developers whose knowledge and methodologies make them ten times more productive than the typical developer in their field.

But bring this technology to bear, with the ability for LLMs to write code, and you can see what the consensus implementation of virtually anything is right now just by telling it to write the code. Whether or not you can cut and paste it and deploy it, which in some cases you might, you can look at that code and say: okay, this is what everybody else is doing, and here's where I add my special sauce.

I challenge companies, when they're thinking about prototyping things and dipping their toes, like you said: go out there and look for firms and developers that look 10x, because there are going to be more and more of them as they learn to leverage this technology. The 10x developer is going to become the new normal.

And that's exciting for figuring out how you're going to leverage your business advantage based on the data and the knowledge and the expertise that you have. So yeah. And I agree with all of that.

And I think about this from a company perspective. There's the 10x developer and that tech approach, but, and this is not going to sound good, especially as this podcast comes out, I see it as eliminating jobs. That's kind of the goal of it, right? But I don't really see it as eliminating people from working, just eliminating jobs.

So for example, if you had 10 people working in an HR department, maybe you don't need 10 people in the HR department. Maybe you can have one person who satisfies that, augmented by AI. Now, that doesn't mean the nine other people lost their jobs; they could be utilized for something else in the organization.

And I think as we move forward, this will allow companies to better utilize resources while investing in this type of technology to make it better and better. Just like the 10x developer analogy, you can apply this to so many different things, like onboarding people and all these other tasks. And I see it as no different from the industrial world.

You went from a standard manufacturing line, where you had a line of people assembling a car, to removing a lot of those people when you augmented them with robots. It doesn't mean those people lost their jobs. They were just redeployed to other types of work to further increase productivity and ultimately the ROI and profitability of organizations.

I see this happening for the information age. Maybe this is the next industrial revolution, just for information. Information and creativity.

I completely agree with you. We've seen this time and time again. It is true.

Both things are true. There's going to be this huge transition period where we redefine what it takes to run a company, and that is going to drive huge workforce-composition changes on a national and worldwide basis.

But the way I look at it, and this will become clearer as time progresses, but to your point, Ira, this touches everything. As a business owner, thinking about it in terms of the C-suite of companies everywhere: if you think about 10x-ing your information staff, whether that's marketing or coding, wherever you have these opportunities, you just step back and say, okay.

The thought process goes like this: let's pretend it's five years from now, and this technology has reduced my IT spend and my marketing spend down to 10% of what they are right now. If I just assume that's going to be the case, what would I spend those dollars on? What's the special sauce in my business that I know, if I had that extra budget, I could elevate and destroy my competition with?

And I think that right there defines your five-year strategy, right? Because it's going to be a glide path to those kinds of savings. But if you think about it in those terms, it's going to help you come to the conclusion of which jobs those are going to be.

Which gets back to the domain knowledge, right? If you're able to reposition the staff who have all that domain knowledge into this new strategy, the one you see once you have budget for things you can't fund today, that's probably a winner, right? Yeah.

So that's a great segue into my second question I want to get your opinion on. All the rage right now is these prompt engineers.

And you see these roles popping up with insane salaries. Ira already knows; I talked a little bit about this. Personally, I think it's a short-term trend for the masses.

Cash those checks. Cash those checks. Okay.

Because here's how I'm thinking. I actually disagree on this one. I think it's... Let me explain my view and then I'll let you have your shot here.

So my take is that prompt engineering is common sense to me. If you want to get an answer to something, you've got to give it detail, and it's a feedback loop: if it's not giving you what you want, get more specific, right?

But, and again, I don't know much about these machine learning models and so forth, let's take a company example: you bring somebody on who's just doing HR onboarding. You could have a prompt engineer who helps do all this stuff.

Or is it going to get to the point quickly where machine learning understands exactly what you're trying to ask? Kind of like Google, right? Because when you Googled 10 years ago, it was essentially prompt engineering.

When you Google now, Google knows what I want before I even want it, right? It's getting to that level because of all these data points, with the algorithms and the machine learning in the background, I guess. That's my take on it.

Ira, what's yours? I know you're very passionate about this. Well, I actually think Googling is still a skill. It's a skill.

Yeah. And yes, you can Google and you can find things, but you can find things, and then you can really find things. It depends on what you're looking for and how granular you can get.

And with the idea of prompt engineering, if you think of ChatGPT, a lot of people use it like Google right now, but that's not really what it is. If you're thinking about it in that context, I agree with you. And if you add in the general templates, where you're like, pretend you're this, or consider this, and act like this, and then do this kind of thing, I still agree with you.

But when you really think about prompt engineering, and about the massive datasets that are out there, and I see this being something that lives on, to really get the data out that you truly want, you need to be an expert in your field and really understand what it is you need to articulate. Crafting that kind of prompt is not as simple as Googling. It's really saying: hey, can you consider this, this, and this, and then look at these kinds of things here. It almost becomes programming.
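The structured, "it almost becomes programming" style of prompting Ira describes can be sketched in code. This is a minimal illustration only; the role, constraints, and HR details below are invented for the example, not any real product's API.

```python
# Hypothetical sketch: composing a structured prompt from reusable parts,
# the way a template-driven prompt engineer might. All values are made up.

def build_prompt(role, constraints, context, question):
    """Assemble a prompt from a role, a list of constraints, background
    context, and the actual question, one piece per line."""
    parts = [
        f"You are {role}.",
        "Follow these constraints:",
        *[f"- {c}" for c in constraints],
        f"Relevant background: {context}",
        f"Question: {question}",
    ]
    return "\n".join(parts)

prompt = build_prompt(
    role="a benefits specialist in a mid-size HR department",
    constraints=[
        "cite the policy section you rely on",
        "answer in plain language for a new hire",
    ],
    context="2023 employee handbook, section 4 (leave policies)",
    question="How many sick days carry over between years?",
)
```

The point of the sketch is that the expertise lives in the role, constraints, and context, which is exactly where domain knowledge comes in.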

It's almost a new form of programming through natural language. That doesn't mean other people can't do it, or that people won't be successful at it.

It's just like Googling. You know people who can Google and get information, and you Google and get information, and you can probably do it more efficiently than some people you know. And you probably know people who can do it more efficiently than you.

There's this scale, and I think prompt engineering is that on steroids, because you have these massive datasets and things.

So I do think it'll continue to be a trend, and the best people at prompt engineering, who may be niche in particular industries, will continue to be very, very good. That's awesome. It's quite niche.

But then I'm curious what your thoughts are. I think the machine learning, after a while, is going to understand what you're looking for, so you're not going to have to get as complex in the prompts.

I don't know. What do you think, Jamison? Well, the reason I think it's going to become commoditized, and the value of it reduced, is... actually, this gets into something: we just released version 1.12 of our system on Friday.

And what's included in that release is a system to provide prompt engineering automatically. Because we saw this exact issue, right? What our system does now is let you set up models that we can train to understand the domain you're working in.

And then instead of calling GPT directly, you call the endpoint on our system. It will automatically make the prompt-engineering calls to set up the context, and then it'll pass your question through. So it creates these guide rails, these subject-matter guide rails, into these large language models. That said, to your point, the ability to articulate your question in a way you know will get the proper response is still valuable.
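The details of Nearly Human's release aren't public in this conversation, but the wrapper pattern Jamison describes, inject a domain-specific system context and then forward the user's question, can be sketched roughly like this. The domain key, context text, and question are all hypothetical placeholders:

```python
# Hedged sketch of the "guide rails" pattern: a wrapper builds the message
# list an LLM endpoint would receive, prepending trained domain context
# before the user's question. Domain names and text are invented.

DOMAIN_CONTEXT = {
    "hr": (
        "You are a guide-railed assistant. Answer only from the "
        "company's HR policy corpus; decline out-of-domain questions."
    ),
}

def guarded_messages(domain, question):
    """Build the system + user message pair the wrapper would forward."""
    return [
        {"role": "system", "content": DOMAIN_CONTEXT[domain]},
        {"role": "user", "content": question},
    ]

msgs = guarded_messages("hr", "What is our parental leave policy?")
```

In a real deployment the wrapper would also scrub proprietary data and call the model provider; this sketch only shows where the subject-matter guide rails slot in.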

But I think there's going to be a leveling effect within organizations, where people can relatively quickly learn how that should work, especially with some support mechanisms in place from third-party providers, like what we're doing and what others do as well. Then couple that with what we talked about before, where you end up getting domain-specific LLMs. So I think the AGI might actually look more like this: when you call our platform, it might run a model to set up the prompts for your LLM and ask the question, as well as watch for proprietary data and pull that out.

So you're not just spilling your guts to ChatGPT. And as this goes on, it doesn't necessarily have to go specifically to GPT. It can know that, hey, if I have this intent coming through, like I'm asking a business question, this should go to my subscription to Bloomberg's LLM, right? Or I have this engineering question, and I have an engineering body of knowledge that's either within my company, or maybe I outsource that to an industry leader that has the corpus of data, and I pay a subscription there to be able to use their models.

And so the AGI then starts to feel more like a network of providers of choice, as opposed to a one-stop shop relying on the cesspool of the internet to figure out what it's talking about. Yeah. So I don't know.
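That network-of-providers idea amounts to intent-based routing, and a naive version can be sketched in a few lines. The provider names and keyword rules below are invented for illustration (Jamison mentions Bloomberg only as an example of a paid domain model, not a real integration):

```python
# Hedged sketch of intent routing: pick a model provider based on the
# apparent subject of the question. Names and keywords are made up.

ROUTES = {
    "finance": "bloomberg-llm",        # hypothetical paid subscription model
    "engineering": "internal-eng-corpus",  # hypothetical in-house corpus
    "general": "gpt-baseline",         # fallback general-purpose model
}

KEYWORDS = {
    "finance": ("earnings", "market", "revenue"),
    "engineering": ("tolerance", "schematic", "load"),
}

def route(question):
    """Return the provider for a question via naive keyword matching."""
    q = question.lower()
    for intent, words in KEYWORDS.items():
        if any(w in q for w in words):
            return ROUTES[intent]
    return ROUTES["general"]
```

A production router would classify intent with a model rather than keywords, but the shape is the same: the dispatcher, not the end user, decides which domain LLM answers.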

I don't know what the right answer is there. I guess we'll have to put a reminder to follow up. I'm right.

No, I'm right. So yeah, that is a lot of information that we went over today. Jamison, we would love to have you back again, because I'm sure we'll have tons more questions.

One that I really want to hit on next time is what you mentioned about how proprietary data flows through these systems. We saw that whole incident; I think it was Samsung.

And there are all the ethics around data: what's good data, what's bad data. Those are all topics I want to cover in a whole separate episode. In the meantime, for anyone who's listening, where can they find you?

What's the best way to get in contact with you? And, you know, give a plug for Nearly Human AI. Absolutely.

You can find us online at nearlyhuman.ai. If you get on there, you can read about us, sign up, and we can answer any of your questions. Or reach out on LinkedIn; I'm at LinkedIn slash Jamison Rotz.

And you'll find me out there talking about all these things as well. So you can always message me there and I'll be happy to get back to you as well. Awesome.

Well, I appreciate you coming on Jamison. Any final words, Ira? I know.

I just, thanks. Thanks a lot, Jamison. It's always great to talk to you.

And yeah, I look forward to having you on again and continuing this chat about all things AI. It's always a good time. Anytime, guys.

This is great talking to you. Yeah. Now I got a lot of things to go and think about.

All right. See you guys. Thanks.

See ya.
