Artificial Intelligence (AI) is no longer a futuristic concept—it’s here, and it’s transforming industries, investments, and daily life. Despite the advantages in processing and productivity, many still have concerns about using AI in their everyday lives at home and in business. This raises the question: Should AI be something we fear, or is it just new technology that we should embrace?
Here to help answer those questions is Jay Jacobs. Jay is BlackRock’s Head of Thematic and Active ETFs, where he oversees the overall product strategy, thought leadership, and client engagement for the firm’s thematic and active ETF businesses. We’re thrilled to tap into his expertise to break down the evolution of AI and LLMs (Large Language Models), how they’re impacting the investment landscape, and what the future looks like in the AI and digital world.
In our conversation, we discussed the rapid development of artificial intelligence and its potential to revolutionize sectors like finance, healthcare, and even customer service. You’ll also hear Jay describe how AI has evolved into a race toward Artificial General Intelligence (AGI), its ability to increase our productivity on a personal level, and whether the fears surrounding AI’s risks are warranted.
In this podcast interview, you’ll learn:
- How AI has evolved from Clippy in Microsoft Word to ChatGPT and other LLMs.
- Why research and investment in AI are accelerating and what’s fueling this rapid growth.
- Why access to data, computing power, and infrastructure are the new competitive advantages in the AI arms race.
- How businesses are leveraging AI to boost efficiency and customer service.
- The race to AGI (Artificial General Intelligence)—what it means and how close we really are.
- How synthetic data and virtual environments are shaping the next frontier of AI development.
Inspiring Quotes
- “AI can create a more productive person across so many different industries.” – Jay Jacobs
- “Any sufficiently advanced technology is indistinguishable from magic.” – Arthur C. Clarke, quoted by Jay Jacobs
- “If artificial intelligence makes you 50% more productive, companies are going to be willing to pay a lot of money for that.” – Jay Jacobs
Interview Resources
- BlackRock
- BlackRock on LinkedIn | Instagram | YouTube | X/Twitter
- Jay Jacobs on LinkedIn | X/Twitter
- ChatGPT
- Claude
- The Imitation Game
- Alan Turing
- Clippy
- Google Translate
- Gemini
- Anthropic
- Mistral AI
- DeepSeek
- Magnificent 7 stocks
- Microsoft Copilot
- Tony Kim
[INTRODUCTION]
Matthew Peck: Welcome, everyone, to SHP’s Retirement Roadmap Podcast. I’ll be your host today, Matthew Peck. Now, if you are living under a rock or in a cave, then maybe this is the first time that you’ve heard of the term ‘artificial intelligence’ or AI. Most likely you’ve not been living in a cave and you know all about it, or maybe don’t know all about it, but I’m sure you have questions about it. It has been, and will be, one of the most innovative and impactful technologies of our generation. I mean, maybe every 20 years, if you think about PCs, then the internet, and now smartphones, AI is really just that next iteration of technology. And that obviously raises a lot of questions.
Where is it leading us? How did we get here in the first place? Is it as game-changing and revolutionary as they say, or is it just, “Oh yeah, great, we’ll be able to read tweets that much faster or surf the net that much faster”? So, to help unpack and kind of untangle all the questions that you might have, that I certainly have, we’re joined by Jay Jacobs. He is BlackRock’s Head of Thematic and Active ETFs, and he’s been in the industry now for over 15 years. So, we couldn’t think of a better guest to have on our show to help us all understand this world.
Matthew Peck: And without much further ado, Jay, thank you so much for joining us and I really appreciate your insight.
Jay Jacobs: It’s a pleasure to be here, Matthew.
Matthew Peck: All right, Jay. So, I know sometimes I start off with a sort of like the cocktail party type of conversation. So, you’re at a cocktail party, wherever that may be. You’re slowly sipping a drink. Someone walks up to you and says, “Hey, Jay, what do you do?” And you immediately say, “Oh, thematic ETFs,” and you mentioned AI. Then the light bulbs go off. What’s the first question that you would get in a cocktail party about AI and how do you generally answer it?
Jay Jacobs: Well, can I just start by saying I have a 15-month-old and a two-month-old at home, so I’m not going to a lot of cocktail parties these days, although I’d love for an invite in the future but I’m not ready to get back out there. Look, I mean, I think as soon as AI comes up, a million questions come up. It is in the news, everyone’s thinking about it, a lot of people are interacting with it. I often get two questions. One is, how are you using AI? You seem to be the expert in that. How has it changed your everyday life? And the second is, is this real or is this just some kind of hype thing that’s coming out of Silicon Valley and ultimately it’s not going to be as big a deal as people are making it?
Those are the two questions. CliffsNotes answer, “I use it a lot and it’s continuing to grow. And no, I don’t think this is hype. I think this is real.” But I’m happy to dig into those answers in deeper detail with you.
Matthew Peck: Well, Jay, actually, let me pick up how you’re using it because I certainly want to talk about professionally or economically how businesses are using it in different industries. And I think we’ll certainly get there. But just on a day-to-day, I mean, for someone that’s as I’d say more aware than your average bear on AI, how about personally? I mean, has it got to the point that you’re actually using it to book flights and things like that? So, I’ve heard there’s AI agents, I mean, let’s say you’re a retiree because a lot of our listeners are retirees. I mean, are people using it in their day-to-day?
Jay Jacobs: They are. It takes a little bit of an adoption hump. You have to find an AI large language model like ChatGPT or Claude that you like, that you kind of get used to, you understand where it works. What are some of the pitfalls? And then just kind of like at some point we all started doing email and we all started using search engines to find things online. We’re all going to start using artificial intelligence but it takes a little time. It doesn’t just kind of happen overnight. In my life, yes, I’ve used it to do travel itineraries. I find it really useful as a starting point not to make this entire conversation about my children, but it’s top of mind. We did a babymoon, my wife and I, before the first one came.
I was a little bit behind on the planning and I put into ChatGPT, “Plan me a trip to Italy. We’re going to go for seven days. We can’t do big physical activities because my wife is pregnant,” and it came up with a great itinerary. It’s not perfect, it’s not 100%, but it’s a starting point and you can edit from there and kind of continue to evolve it. So, that’s just a great use case for people who are retired, looking to travel, and want kind of that starter pack for thinking about artificial intelligence. Secondly, though, I use it as a knowledge base. There is so much content out there. There’s so much information. There are so many, hate to say it, Matthew, but there are a lot of podcasts out there. How do you make sense of it? How do you kind of distill the right answer that you’re looking for?
Artificial intelligence can be really useful. If you just think about on a search engine, you might search for what’s a good ETF, right? Or they should be asking you, but if they’re searching for what’s a good ETF, the search is going to show you 20 answers and you’re like, “I don’t know.” If you type into an AI large language model, you might get a much more precise answer, or you might get a more thoughtful answer about the differences between an ETF and a mutual fund, or explain to me what a futures contract is, whatever it is. An AI large language model could be really helpful at distilling information, packaging it up for you in a very easy-to-consume way.
Matthew Peck: Well, it’s funny you say that because it’s one of these like eureka moments, or however you want to put it, over the past couple of months for me, personally. Again, not talking about what I’m using it for on the job as a financial advisor and a business owner and things like that, but more so that there was a condo that I was associated with, and they were changing the bylaws. It was something to do with rentals and the ability to rent. And so, here was this probably seven- to ten-page Word doc of bylaws, and here are the suggested changes, and all this legalese, and I’m like, “Okay. Could someone just summarize this for me?” And then all you do is click a button and it’s like, “Okay. Here’s what they’re proposing to change,” and so forth and so on.
So, it literally took a 10-page document, all that legalese, however long it would have taken me to interpret it, and summarized it in, I don’t know, maybe 30 seconds. It was really impressive, and talk about a use case, or at least just say, “Oh, this is that time it saves and this is where they talk about productivity.” Because I think, Jay, one of the big things to talk about, at least economically, I’ll stay there and we’ll eventually go back to the beginning because I certainly have a lot of more basic questions, but they talk about productivity. I think that’s one of the biggest things when it comes to hype or not hype, the ability to be more productive. So, speak to that a little bit, and is my example a good example?
Jay Jacobs: That’s a great example. If you look at a lot of CEOs in the United States, there’s really two ways that they think about AI. One is about being more productive, getting more from your employees. But if you’re spending a lot of time sifting through email, reading large documents, some of the more operational stuff around your job that feels a little bit kind of routine, maybe not the best use of your expertise, and AI can help with that or even automate it. It just frees you up to work on the higher value things, whether that’s talking to clients, whether that’s developing insights. Whatever it is, AI can create a more productive person across so many different industries.
So, that’s the efficiency, that’s the productivity story. At the other end of the spectrum is creating new features, creating new products that leverage artificial intelligence. So, if you are struggling at home at 9 p.m. trying to figure out what to watch on TV, a better AI engine would do a better job of recommending a show that you want to see. It’s going to take into account maybe more information. Maybe it knows that you had a long day on the commute, the train was a little late, and you’re kind of in the mood for one kind of show, versus it’s a Sunday night, you’ve had a glass of wine, and you want to watch a different kind of show. There are different things that can get smarter and enhance a product by using artificial intelligence. So, there’s absolutely a productivity story, but there’s also this growth and better product advancement kind of story.
Matthew Peck: Interesting. All right. So, we’ll definitely come back to what the future holds, but to kind of go back to the overall structure, I’ll use the Christmas Carol structure of the ghost of AI past, the ghost of AI present, and the ghost of AI future. Let’s go to the past first. Let’s start there. Jay, if you don’t mind, just walk us through the history of AI. I mean, I’m assuming it wasn’t just one breakthrough, and it’s been around a little bit longer than people might imagine. Were there different names for AI? I mean, I think a lot of people are probably focused on the sci-fi, the science fiction, and all the movies and the books back in the day. So, let’s start as close to the beginning as we can in this world.
Jay Jacobs: Yeah. I like to say themes generally like artificial intelligence kind of happen slowly until they happen all at once. And that’s certainly the story with artificial intelligence. So, we have to go back to about 1950 to see really the beginning roots of artificial intelligence as a serious concept. If anybody listening to this has seen the movie, The Imitation Game, with Benedict Cumberbatch, he’s playing Alan Turing, he created the Turing test which was the idea that at some point computers are going to get so smart we have to develop a test to differentiate between are you having a conversation with a human or are you having a conversation with a computer. That’s really kind of the genesis of artificial intelligence as we know it today.
Now, fast forward about a decade, you start to see really basic industrial robotics come into play. Think about some of the early robots that helped build cars. Really, these are kind of mechanical tools that were helping build in heavy industries. I would fast-forward all the way into the 1990s. You started to see kind of AI agents, really rudimentary AI agents, start to pop up in certain places. I’ll go to an infamous example here, but if people remember Microsoft Word in the 1990s, you’d start typing, and this little character named Clippy would pop up on the left-hand side of your screen and say, “Are you writing a letter? Do you want help writing a letter?” It was not a very good AI agent.
And to get really kind of into the details of it, it used a thing called Bayesian probabilities where it was trying to kind of guess using probabilities of what you were doing and how it could help. So, if you wrote, “Dear _____,” it would guess you’re probably writing a letter and suggest from a list of pre-created suggestions that if you’re writing a letter, use this letter template on Microsoft Word. I think close to nobody used it. It disappeared from Microsoft Word about 10 years later.
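As a rough illustration of the Bayesian guessing described here, consider a toy sketch. The probabilities below are invented for the example, and this is not Microsoft’s actual logic, just the shape of the inference:

```python
# Toy sketch of Bayesian intent-guessing, Clippy-style.
# All probabilities here are made up for illustration.

def posterior(prior: float, likelihood: float, evidence: float) -> float:
    """Bayes' rule: P(intent | cue) = P(cue | intent) * P(intent) / P(cue)."""
    return likelihood * prior / evidence

# Hypothetical numbers: 20% of documents are letters (prior),
# 90% of letters open with "Dear" (likelihood),
# and "Dear" opens 25% of all documents (evidence).
p_letter_given_dear = posterior(prior=0.20, likelihood=0.90, evidence=0.25)
print(round(p_letter_given_dear, 2))  # 0.72
```

With numbers like these, the assistant would bet that you are writing a letter and surface one of its pre-created letter templates, which is roughly the behavior Jay describes.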
Matthew Peck: It was very annoying. It was a paperclip and it was sort of a child telling me what to do. It was very intimidating and, yeah, eventually it X’d out. Clippy didn’t — he wasn’t long for this world.
Jay Jacobs: Not long for this world, and it was early. I think these themes take time. They take some fits and starts. You see some innovations that don’t quite work out. Clippy just wasn’t that smart and not that useful when it comes down to it, right? So, fast forward another decade or so, to 2011, on the iPhone 4S, you saw Siri introduced into our phones. And so, suddenly, in almost all of our pockets, you had another type of AI living there, ready to help you with things that you wanted to do. Now, 2011 Siri was not the most sophisticated AI either. It used this thing called convolutional neural networks, and the idea behind it was there’s a little more depth here. It’s not just trying to guess what you’re saying, it’s not just trying to kind of lead you to a template like the letter template Clippy was. You could interact with it and you could say, “Type a message to my dad and say, ‘Hi, how are you doing?’ Send.”
In concept, this is actually a pretty big advancement in artificial intelligence. It was certainly not perfect. Siri’s still not perfect today. Anyone who has an accent, it tended not to work very well. If you tried to go off-script and say something a little looser, like, “Eh, send a text to my buddy, da-da-da,” like it would get confused pretty quickly. You had to be very direct about what words you used. But generally speaking, this was really a step up in the dimensions of artificial intelligence and the sophistication of it. So, 1996 Clippy. 2011 you go to Siri.
Let’s fast forward now to November 2022. This is the actual iPhone moment of artificial intelligence. I don’t mean literally an iPhone like Siri. I mean, more figuratively, this is when you saw a massive explosion of capabilities that everyone started to interact with, just like the iPhone was this massive acceleration of smartphone adoption. What happened in November 2022 was the release of ChatGPT. And this program, in a lot of ways, wasn’t necessarily the most state-of-the-art artificial intelligence, but it saw some of the most adoption that we’ve ever seen of any technological platform ever. By the time it came out in November 2022, it had already been about a year and a half old.
But the biggest change that happened with ChatGPT was chat. They introduced a chat box. And it meant that you didn’t have to be a software engineer to interact with this large language model. You just had to be able to type. And if you could type into this box and say, “Write a letter to my friend asking how he is,” you’d have a letter right there. And there’s a quote that any sufficiently advanced technology is indistinguishable from magic. ChatGPT was magic when that came out. It was unbelievable that you could suddenly have this thing produce really unique content.
Matthew Peck: Well, Jay, just to pause it because I do want to go back to that and that’s really interesting about the, maybe it’s funny, it’s dawning on me now, the whole idea of the chat aspect of it, frankly. But you had mentioned before and then you just mentioned it again, LLMs, large language models. What is an LLM? What is a large language model? Why was, I mean, yes, the chat box on the ChatGPT, but what’s this impact of LLM? Why are those initials so important?
Jay Jacobs: So, we’ll go back to Clippy, right? I used this kind of annoying term, Bayesian probabilities, which was how Clippy worked. It was basically trying to guess what you were doing, and if it guessed that you were writing a letter, it would show you a form for a letter. What large language models, LLMs, are doing is actually introducing the idea of context into that guessing. It’s still trying to guess what you want. It’s still trying to understand, like if you say, “I’m writing a letter to my friend,” how is the letter constructed? What do we think we know about this friend? What are you trying to get at here? But the context is what matters. And I’ll give a really simple example.
If you used Google Translate about 10 years ago, and you said, “Translate from English into French: The tanks are advancing down the battlefield,” Google had a 50/50 chance of getting that translation right. It would just try to guess what you were trying to say and it would replace every individual word with a guess in French. Now, the problem is there are a couple of different words for tank in French. Chars, and apologies for my pronunciation, but chars is what you would call tanks in the military sense. Réservoir is what you would call a tank in the holds-a-lot-of-water sense. And so, Google Translate had a 50/50 shot of getting that translation correct.
Now, with large language models, it’s taking context of what you’re asking and context within that sentence to develop new content. So, if you say the tanks are advancing down the battlefield, there’s a much higher probability if you’re saying battlefield that you mean chars and not reservoirs, right? So, just imagine that exploded into an extremely sophisticated model of probabilities and relationships across words. And an LLM now can create really convincing content based off of relatively simple inputs of what it thinks you’re asking it to do.
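To make the translation example concrete, here is a toy sketch of context-based disambiguation. The affinity scores are invented for illustration; a real LLM learns billions of such relationships from training data rather than from a hand-written table:

```python
# Toy sketch: choose a French translation for "tanks" from the words around it.
# The affinity scores are made up; an LLM learns these relationships from data.

CONTEXT_SCORES = {
    "chars": {"battlefield": 0.9, "advancing": 0.6, "water": 0.05},
    "réservoirs": {"battlefield": 0.05, "advancing": 0.1, "water": 0.9},
}

def pick_translation(sentence: str) -> str:
    words = sentence.lower().split()
    # Score each candidate by summing its affinity with the surrounding words.
    def score(candidate: str) -> float:
        return sum(CONTEXT_SCORES[candidate].get(w, 0.0) for w in words)
    return max(CONTEXT_SCORES, key=score)

print(pick_translation("the tanks are advancing down the battlefield"))  # chars
print(pick_translation("the water tanks are full"))  # réservoirs
```

Seeing “battlefield” pushes the choice toward chars; seeing “water” pushes it toward réservoirs, which is the contextual leap Jay is describing.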
Matthew Peck: So, now, did LLMs exist before 2022? I mean, how long have LLMs been floating around?
Jay Jacobs: Yeah. So, there were kind of three conditions for this explosion to happen, this Cambrian explosion of artificial intelligence. One, we needed a lot of data, because at the end of the day, the way that these large language models are trained, just like you would train a human in how to write, is on data, consuming data across the internet, learning how we tend to write, what is the relationship between battlefield and chars versus reservoir and tank. And what we saw was, from 2018 to 2020, we created more data than everything combined prior to 2018. So, in three years, more data than thousands of years of human history.
Matthew Peck: I’m glad I’m sitting down. I’m glad I’m sitting down, Jay. That’s a lot of data, that’s a short period of time, and that’s a lot of silly cat videos. I’m imagining 90% of that is silly cat videos is my guess, but please continue.
Jay Jacobs: And if we get into the conversation of whether AI is sentient, we’ll figure out what they find out about us soon, but probably “cat lover” is the first thing they’re going to think about us. Or they’re going to think cats are the dominant species of the world. So, I think of data as the fuel. If you want to create an analogy about a car, a car really needs three things: you need an engine, you need a driver, and you need fuel. There are obviously more things, but let’s talk about it in those three ways. Data is the fuel, massive amounts of fuel in the last three years to power this thing.
Number two, the engine. We have more powerful chips than ever before. So, some of the chips coming out, you may have heard the term GPU, a graphics processing unit. These are the most powerful chips today for training artificial intelligence models. They’re about 600,000 times more powerful than the chips coming out 10 years ago.
Matthew Peck: Wow.
Jay Jacobs: These are extremely powerful; it makes chips from 10 years ago look like vacuum cleaner engines and today’s chips look like a rocket ship. They are completely different, incredible amounts of power for processing this explosion of data. Now, the third thing, we already mentioned it, was the large language models, this ability to develop relationships and context between words. This is the new driver. This is the new way of thinking about data and thinking about the creation of content. It’s not that it was invented in November 2022, but if you really kind of look at the early 2020s, this is when you start to see really big breakthroughs in large language models. But because of the data, because of the chips, because of the advancements in these models, you have this incredible moment of ChatGPT and kind of everything that’s followed.
Matthew Peck: And so, let me ask too about the one thing I was always unclear of and maybe just simple as the algo or the algorithm, but how was one large language model better than the next? I mean, you have ChatGPT, then you have Gemini, then you have Anthropic, you mentioned France. So, let’s talk about Mistral, right? All these are now AI companies with their own LLMs and it’s like, I mean, are they always like Coke and Pepsi, and is that all it is, is just taste? Or how are these guys competing to be the best model, or is that even the best way of putting it?
Jay Jacobs: It’s a wonderful question and in the reality, we are still very early days in this type of artificial intelligence and you see it kind of being a multi-horse race, right? There are certain models that are better at certain things than others. I don’t think there’s just one clear winner across the board. Think about all the things these models can do. They can go text to image. You can type in text, create a picture of me on a computer and it’ll create an image. You can do text-to-speech. You can do video to text. You can do coding. There’s all these different modes of an artificial intelligence large language model, and some are going to be a little bit better than others.
But I guess if we take a step back, the ingredients for this large language model explosion tend to be the same ingredients that make one model better than another. One, who has the best and most data? One of the ways that we look at the complexity of these large language models, though it’s still hard to measure, is the number of parameters, which really reflects the amount of data it was trained on. And so, what you’re seeing actually from an investment perspective is some of the most interesting companies in the artificial intelligence space today aren’t the ones developing these models per se. It’s the companies that are sitting on really unique data sets that are suddenly becoming really valuable.
You may see some of these social media companies where people for free are posting these long typed-up really well-researched things and now that can be used to train a large language model. It goes from just being kind of a post to help selling advertisements on a social media platform to having a totally different use case of selling that data to a large language model to increase the number of parameters. So, data and the uniqueness of that data is becoming really valuable as an input.
The second thing, in a resource-constrained world, which we are in, is access to digital infrastructure, everything from access to those GPUs I was talking about. There is not an infinite number of GPUs out there. There’s quite a backlog. Access to the real estate, where these GPUs are going to be hosted. AI doesn’t happen in thin air. There are buildings where you need to have cooling, you need to have security, you need to have a lot of other chips and computers around those GPUs to really kind of service this AI engine. Those are in short supply today. And so, that’s a really important part of developing large language models: access to digital infrastructure.
And then even more simply, access to things like power. These GPUs are extremely power-hungry. For example, running a query, typing in a prompt into a large language model uses about 10 times the amount of power as typing something into Google for search. So, even access to power is becoming a competitive advantage for some of these large language model developers. So, if you have the right data, you have the access to the digital infrastructure, you have the power, and then of course you want brilliant software engineers developing these models, that’s kind of a recipe for success right now. And what you’re seeing…
Matthew Peck: But, Jay, let me interrupt because there are two things that I want to ask about. The first part is you mentioned the need for power and the warehousing and the CPU or, I’m sorry, GPU, I think that’s the better way of putting it, capability. Okay. So, DeepSeek. So, for all of our listeners, DeepSeek was a Chinese LLM, and you’re going to correct me because I might be all over the place here, that reportedly used half the GPU capability. I mean, is that why that was such a sort of shot across the bow?
Jay Jacobs: So, one way to think about DeepSeek, it was really a derivative program from a lot of the work that the large language models in the United States have been building. So, an analogy for this to make a little more sense, imagine you’re a company, and this is going to sound like three decades old already but imagine you’re a company developing…
Matthew Peck: But you mentioned Clippy. So, Jay, you already mentioned Clippy. So, go ahead, pal.
Jay Jacobs: We’ve already taken a trip through history here. Imagine you’re a company building an encyclopedia. You have to hire researchers. You have to collect a ton of data. You need to hire editors. You need to organize all of that. And you’re going to come out with a huge volume of books, like the ones I remember seeing on the shelves. You have a book for every letter of the encyclopedia and like three books for S and other commonly used letters. That’s a really expensive endeavor, building that first encyclopedia. Now, if someone comes in and says, “I read that encyclopedia. Actually, I read that encyclopedia like 500 times, and I figured out that I can distill that 500-page encyclopedia down to about 20 pages and keep most of the important information,” that’s what DeepSeek did.
It was not creating the encyclopedia. It was not developing the kind of original models here. It was in a lot of ways kind of condensing and making more efficient some of those models. So, it’s still an important piece of the AI pie. What it showed is you can develop really efficient models that require less compute. But in a lot of ways, it’s kind of a lower-end, lower-cost type of large language model.
Matthew Peck: Okay. So, it’s almost like it’s the, forgive me, Toyota, Toyota makes a good car, don’t get me wrong, but that’s almost like the Toyota, and then the ChatGPTs are the Ferraris, for lack of a better…
Jay Jacobs: I think that’s very fair. The end game here is not to have a Toyota or a Ferrari. The end game here is to have a rocket ship. And that rocket ship would be called AGI, artificial general intelligence. This is kind of the future of artificial intelligence, where this program can do anything and everything, truly replicate everything a human can do. Large language models are great at creating content, but that’s not all humans do. We drive cars and make executive decisions. We raise our kids. Like, there’s a lot of nuance in what we do that goes far beyond just kind of typing or creating pictures. So, a lot of these billions of dollars of investment coming from some of the largest technology companies around the world today is not for the large language models we have today. It’s for the ones one, five, or ten years out that can really achieve AGI.
Matthew Peck: Well, before we get to the future, because I do want to come back to what you just mentioned there, right, with their investments, and also you mentioned the power that’s necessary and the investments that these companies are making. So, I mean, that’s something that is very current in regards to the concerns, like Microsoft has been off, literally down; at the time of this recording, over a full year, it’s actually lost value. I mean, I know there’s general concern of like, “Okay. When is this going to pay off? When is it going to be profitable?” Because, I mean, I’ve read that one query on ChatGPT uses a lot of GPU compute, which is, again, a very costly system, to spit out where to go in Italy for a babymoon, right? So, I guess, how do you respond to when is this going to be profitable?
Jay Jacobs: Well, it depends what part of the value chain you’re looking at today. So, right now, it’s extremely profitable for the GPU makers. It’s quite profitable for the data centers, but we’re really kind of in this build stage of artificial intelligence. We think this is going to take a few more years. There’s going to have to be billions of dollars. We’ve already seen the Mag-7 stocks, the largest technology stocks in the United States, are committing north of $250 billion this year on CapEx, spending on digital infrastructure to support their AI ambitions. But this is the build-out. This is building the foundation of the house. It’s not necessarily ready for prime time yet.
Once we’ve done that huge build-out, we’ve seen these models get more and more powerful and sophisticated, then you can really start to see the revenue and profitability realized by these companies. If artificial intelligence makes you 50% more productive, companies are going to be willing to pay a lot of money for that artificial intelligence, right? If it makes you three times more productive, they’re going to pay way more money for that artificial intelligence. If it can cut some of the costs from a cost structure, a company becomes really valuable. And so, I think part of the calculus here is, one, how soon is adoption really going to happen where companies start to spend meaningful money to implement artificial intelligence across their business?
And the second piece is how wide-ranging is it? If AI can only write an email for you, that’s not that valuable, right? But if AI can discover new drugs, if AI can drive cars, if AI can write legal documents, suddenly these things become really powerful, really valuable, and cut across so many different industries that the investment is absolutely worth it for these companies. Even though it sounds like hundreds of billions of dollars is a massive expense, artificial intelligence could be the tool that’s running every industry going forward. And so, I think that’s how these companies are looking at it. It’s a lot of money, but they actually can’t afford to not invest in it given the total addressable market here.
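The willingness-to-pay logic above is simple arithmetic. As a sketch, with purely hypothetical salary and productivity figures:

```python
# Back-of-the-envelope version of the argument: the bigger the productivity
# gain, the more a company can rationally spend per employee on AI tooling.
# All figures are hypothetical.

def max_worthwhile_spend(salary: float, productivity_gain: float) -> float:
    """Upper bound on annual AI spend per employee: value of the extra output."""
    return salary * productivity_gain

print(max_worthwhile_spend(salary=100_000, productivity_gain=0.50))  # 50000.0 (a 50% gain)
print(max_worthwhile_spend(salary=100_000, productivity_gain=2.00))  # 200000.0 (a 3x jump)
```

The same logic scales with breadth: the more roles and industries the gain applies to, the larger the total addressable spend.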
Matthew Peck: Well, and I think that’s the key phrase right there: total addressable market. Because I’ve heard that, at least currently, it’s helping out a lot with customer service, where, I guess, chatbots, I don’t even know if they still call them chatbots, those little pop-up blurbs, but I’ve heard that a lot of customer service companies, not telemarketing, but these incoming-call operations, rather than routing calls to a human, they go to an AI that can figure out what that person is asking and give them a good response. And so, rather than hiring a human, whoever that may be, now the computers can handle it, as an example.
And you’d also mentioned a little bit off air, how is BlackRock using AI right now? Because I can tell you how SHP is, I’m happy to share, but I want you to share how BlackRock, this large, massive firm, one of the biggest investment firms, is using it currently.
Jay Jacobs: Absolutely. And the answer is a lot of ways. We’ve talked a lot about the productivity example, but every employee at BlackRock has Microsoft Copilot, which is kind of a derivative of ChatGPT on our desktop. So, we can use that to help write emails or schedule meetings or summarize documents. So, there’s absolutely the productivity component. There’s the investing component, using AI to become better investors. There are some really cool ways that our systematic investing team, think about math quants, are using artificial intelligence to understand what are CEOs saying in their earnings reports, what are news articles saying, what’s social media saying about these companies, something that no one analyst could do, but artificial intelligence can synthesize all this information really quickly and come up with an insight like, “This company looks pretty good,” or, “Uh-oh, there’s a lot of things saying this company is bad.”
So, we’re using AI in investing at BlackRock. I would say a third pillar is in our technology. So, BlackRock, and maybe a lot of people don’t realize this, is a technology company as well as an investment company. We build a lot of software that the finance world runs on, and we’re embedding AI into that software to make all the people who use it more efficient, in some ways through that kind of chatbot interface, but in other ways as well. How can you get better at using our software? And then the fourth way is we’re providing access to artificial intelligence through our investments. So, giving people the ability to invest in the AI value chain, whether it’s AI stocks across the United States, whether it’s semiconductor stocks, whether it’s digital infrastructure, we’re creating funds for people to be able to access that as well.
Matthew Peck: Okay. So, there are a couple of different use cases we’re talking about right now, both directly at BlackRock and, as you mentioned, I did like that when Microsoft used the term Copilot. I thought that was a well-crafted name, because it really is someone sitting next to you, helping you be more productive, more useful, and faster.
Jay Jacobs: I was rooting for Clippy 2.0, but I think that got scrapped from the idea board.
Matthew Peck: Ha, I’m sure the branding people are saving that one for the future. But back to what we were talking about, because I want to talk a little bit about the future. It’s interesting, I had not heard that term, AGI. So, I want to come back to AGI, and I do want to talk about the future, and also to address what’s sometimes silly and, every once in a while, not that silly. I mean, you do have the doom-and-gloomers out there. I think even Musk at a certain point was saying that if we don’t put safeguards around this, it’s going to run rampant and take over the world, and maybe that’s just all of our science fiction. In Terminator 2, the AI launched nukes against us because the humans were the enemy. All those different tropes and narratives, etcetera. Let’s peer into the future a little bit. I know we’re predicting, and I know it’s speculating, but how do you respond to the doom-and-gloomers? And then, in your opinion, where is AI heading?
Jay Jacobs: I’ll take those questions in reverse because maybe they kind of end up in the same point. What we’re seeing is this exponential growth of the complexity of these large language models and a lot of it has to do with AI scaling laws and I feel like I just vomited a lot of tech words in one sentence there. It’s a race to AGI, it’s a race to artificial general intelligence, this kind of singularity moment where AI is going to be as advanced as humans and can do all the things that humans can do. But the challenge with that is that the sophistication of these large language models is starting to kind of hit a logarithmic slope if you will. What I mean by that is it takes exponentially more data and exponentially more computing power for linear improvements in these models.
So, we already have pretty good models. To get to the next stage, you need exponentially more data and exponentially more computing power just for that marginal increase. So, that’s what is driving the massive amounts of investment in AI, these diminishing returns on compute and data. What it’s also driving is that we actually might have a shortage of data. I said that from 2018 to 2020, we created more data than in all of history before that. Well, we’re starting to run into an outer limit of all the data that artificial intelligence can process. The low-hanging fruit has been digested by these models. So, there’s an impetus to create even more data.
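The diminishing-returns dynamic Jay describes can be sketched in a few lines. This is a toy power law in the spirit of published AI scaling laws; the constants `base` and `alpha` are illustrative, not fitted to any real model. The point is just that equal, linear steps down in loss demand multiplicatively more compute:

```python
def loss_from_compute(compute, base=10.0, alpha=0.05):
    """Toy scaling law: loss = base * compute^(-alpha)."""
    return base * compute ** -alpha

def compute_needed(target_loss, base=10.0, alpha=0.05):
    """Invert the power law: compute needed to reach a target loss."""
    return (base / target_loss) ** (1 / alpha)

# Each step down in loss is linear (6.0 -> 5.5 -> 5.0), but the
# compute required grows multiplicatively at each step.
for target in (6.0, 5.5, 5.0):
    print(f"loss {target}: compute ~{compute_needed(target):.3g}")
```

Under these made-up constants, each half-point improvement in loss costs roughly six times more compute than the last, which is the "exponentially more for linear improvements" pattern in miniature.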
There’s an impetus to create synthetic data, which I think is probably the most mind-boggling idea of all of artificial intelligence, which is, can you have AI create data and then train itself on it? It’s like telling an AI, “Go write a textbook and learn from it.” But it actually does have value. Think about autonomous vehicles for a second. You could train autonomous vehicles by having all these drivers drive on a road and the AI is learning from it, right? It’s going to see you just parallel parked, you just kind of went into the other lane to get around the UPS driver. But what if you could do that even faster? Because one AI created a digital world, a virtual world, and another AI is the driver driving through that world, and the AI is learning from how that driver is driving through that world.
It’s creating synthetic data. Nothing’s actually happening. This is like two computers interacting with each other, but you can create a ton of data and you can throw different experiences at that AI driver through that virtual world and suddenly you’re creating all this synthetic data. So, extrapolate that to a lot of different fields and we can create data to train artificial intelligence.
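The two-computers-talking-to-each-other loop can be made concrete with a toy: one hypothetical "simulator" invents labeled driving scenarios, and a hypothetical "driver" trains only on that synthetic output. Both pieces are illustrative stand-ins; no real road, sensors, or AI library are involved:

```python
import random

random.seed(0)  # deterministic toy run

def simulator():
    """Generate one synthetic scenario: a distance to an obstacle
    in meters, plus the action the simulator labels as correct."""
    distance = random.uniform(0.0, 100.0)
    correct_action = "brake" if distance < 30.0 else "drive"
    return distance, correct_action

class Driver:
    """A trivial learner: estimates the braking threshold from examples."""
    def __init__(self):
        self.threshold = 0.0  # starts out never braking

    def train(self, examples):
        brake_distances = [d for d, a in examples if a == "brake"]
        if brake_distances:
            self.threshold = max(brake_distances)

    def act(self, distance):
        return "brake" if distance < self.threshold else "drive"

# The loop: 10,000 scenarios generated and consumed entirely in software.
examples = [simulator() for _ in range(10_000)]
driver = Driver()
driver.train(examples)
print(f"learned braking threshold: ~{driver.threshold:.1f} m")
```

The driver ends up recovering the simulator's hidden 30-meter rule without a single real-world mile, which is the essence of the synthetic-data idea, stripped down to a threshold instead of a neural network.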
Matthew Peck: Now, is that like the metaverse coming back? That was the big trend in all the heady days of the 2010s. Is that a version of that? Or is meta, the whole metaverse world, just don’t even worry about that anymore, that’s old news?
Jay Jacobs: This is where my brain starts to hurt. Like, I’m not totally sure. I think we’re in a bit of a metaverse winter. It was a theme that, in many ways, got a lot of hype but didn’t see the amount of adoption that people were expecting. Are we going to see virtual worlds going forward? Probably, we will see advancements there, but I think AI has really jumped the pecking order, kind of cut in line, as the area of focus for a lot of these mega technology companies around the world.
Matthew Peck: All right. I was just curious because I remember like I just didn’t know if it’s going to come back and be all retro and somehow the AI is going to help launch that.
Jay Jacobs: Where are we today? We have a portfolio manager at BlackRock, Tony Kim. He manages various AI-related funds and he likes to say GPT-4, which is the latest iteration of ChatGPT, is like a high schooler. And actually, if you ask it to write an essay, it kind of reads like a high school essay. It’s not groundbreaking, it’s not perfect, but it can summarize a book and have a thesis to it. The next stage will be college-level artificial intelligence, and I think that might be a couple of years away. We’ll get to master’s level as it ingests even more data and gets more sophisticated. A few more years after that, we’re going to get to Nobel Prize-winning amounts of intelligence in these models, maybe creating discoveries within these models that we haven’t conceived of.
And then finally, we’ll achieve AGI, where the intelligence of these models surpasses humans. It will take hundreds of billions, if not trillions of dollars to achieve that. It will take several years. It will take tons of data and advancements, but that’s the trajectory that we’re on right now. And the idea that we’re just in high school right now is kind of mind-blowing if you will.
Matthew Peck: Yeah. Okay. So, then AGI, I mean, maybe the future is just too cloudy to see. But is it something that runs on our phones, like a version of Copilot or ChatGPT? Or no, is this a standalone, almost-human thing?
Jay Jacobs: It could take a lot of different forms. It could be something that follows you around, right? It could be on your phone and then transfers to your car when you get in your car and starts driving you around and then shows up on your desktop at work as you start doing your job. How it materializes is up to the technology companies, it’s up to regulators, it’s up to corporations, how they want to integrate this. But if we fulfill this kind of AGI vision, it’s going to be everywhere.
Matthew Peck: That’s amazing. Now, I would say too, because very often people will ask, and it’s funny, about blue-collar and white-collar jobs, in the sense of, okay, does that mean lost jobs? And I have seen a number of studies that say, okay, with every technology, yes, there are jobs lost, but there are actually more jobs gained with the implementation of new technology. I mean, would you agree, or do you think AI is similar to those past technologies?
Jay Jacobs: Yeah, these things are actually pretty difficult to predict. The idea that the automated teller machine would make people working at bank branches obsolete actually isn’t true, right? I’ve seen the stat that we have more people in bank branches now than before the invention of the ATM, actually. The amount of people has increased. What it does is change the role of that person. So, handing you a stack of 20s is not really their highest value, but helping answer your questions, helping you navigate the complexities of the financial institution, that becomes very valuable, right? So, I think some jobs will evolve. Some jobs will change.
Yes, we’ve seen, I forget the exact stat, but something like a third of the United States used to be involved in agriculture, and now it’s in the low single digits. You do see that these macro trends evolve over time. And it’ll make people maybe have the same job, but just be more productive at it. So, we’re kind of in wait-and-see mode. Again, a lot of people draw an equivalence between AI today and having an unpaid intern. And I think that’s pretty aligned with Tony Kim saying we’re at kind of a high school level of artificial intelligence. Like, that would be your unpaid intern right now. But as it gets to master’s-student level, suddenly that’s someone who’s pretty effective at their job.
Matthew Peck: Yeah. I like that framework, and I really, really enjoyed that perspective. And I was going to say, Jay, I think we’ll slow down now and let everyone process this, but I would certainly love for you to come back and find out where we are with that intern. But on a more personal question, how did you get into this? I mean, were you always fascinated with technology, or did you just kind of stumble into it, and the next thing you know, it took off from there? How did you get into this field in the first place?
Jay Jacobs: I suppose a lot of things are coincidence, but I grew up in Northern California in sort of the early days of Silicon Valley. I was really interested in international studies as a college student and got my degree in that. And then I ended up in the fast-growing and exciting world of exchange-traded funds. And so, some combination of investing, plus technology, plus a lot of what’s happening in the geopolitics space right now, have all kind of come together for my career. It’s been unbelievably interesting, I would say. And in some cases, we’ve been wrong.
When I started thinking about these themes several years ago, I go back to 2017-2018, it was before I was at BlackRock, but I was doing a lot of thematic investing, and where we thought a lot of the advancements would be was in robotics, automating more of human physical exertion. And actually, it turns out that robots are not nearly that advanced; instead, we’re seeing more progress on the services side of artificial intelligence right now. As we’ve been saying, large language models can write a pretty good essay, write pretty good emails, do pretty good research, and summarize things for you. That’s more in the services space. In terms of physical implementations of artificial intelligence, robots still really struggle.
Actually, one of my favorite examples is that they’ve created robots that try to fold laundry, and they fail miserably. It’s one of the funniest things to watch, because there’s something we haven’t really figured out: humans are really good at looking at a pile of clothing in a hamper and saying, “That’s a pair of pants. I’m going to fold it like a pair of pants.” When a robot looks at a jumbled mess of clothing, it doesn’t know how to identify what a thing is. It doesn’t know what a pair of pants is when it’s piled in there with a bunch of other stuff. And so, folding laundry has been one of these things that actually differentiates humans from AI. Not that I think we’re all going to have to fold laundry in the future. But robotics has really slowed down.
So, one of the things I’ve learned is that these themes can grow and evolve, and they can evolve in some unexpected ways. The services and content side of artificial intelligence has really taken off, while the robotics leg of artificial intelligence has, in some ways, not slowed down exactly, but kind of hit a plateau. We’re really struggling to break through in robotics these days.
Matthew Peck: So, yes, I think it’s a perfect example again, Jay, of why we should continue this conversation in the future and keep following these trends. I mean, the fact is it’s going to be so pervasive that it’s going to impact all of our lives. You mentioned geeks like us talk about total addressable market, and that’s just a fancy economic term for how many aspects of your life it is going to impact. And it’s here to stay, right? I mean, this isn’t something where, “Oh, well, you’re going to have AI today, and then tomorrow it’s gone, that was just a fad.” No, this is going to be with us. Now, we don’t know in what shape and form, like you mentioned, Jay, but we have to get used to it, we have to understand it, and we have to get comfortable with it.
And so, Jay, thanks so much for being on the show to help me and all of our listeners get a better understanding of how we got here and where we might end up going, knowing that we can’t predict it by any means, but the more we know about it, the more ready we’ll be and the better off we’ll be, because we can hopefully use it to our advantage and be more productive, whether that’s getting the itinerary to Italy, which I love that story, or, for me personally, reading the bylaws of this condo association, which, literally, just talking about it hurts my head. So, again, really, really appreciate the time. And thanks for joining the show.
Jay Jacobs: It’s a pleasure to be here, and I’m sure in a year there’ll be a lot of updates to share, so looking forward to coming back.
Matthew Peck: Thanks so much, Jay. And again, for all of our listeners, I really appreciate your time and energy. So, don’t hold your breath for any laundry folding; AI will not be able to do that anytime soon, so you can rest assured you’ll still be doing that yourselves. In the meantime, thank you so much for listening to our show, and catch us next time.
Certain guides and content for publication were either co-authored or fully provided by third party marketing firms. SHP Financial utilizes third party marketing and public relation firms to assist in securing media appearances, for securing interviews, to provide suggested content for radio, for article placements, and other supporting services.
The content presented is for informational purposes only and is not intended as offering financial, tax, or legal advice, and should not be considered a solicitation for the purchase or sale of any security. Some of the informational content presented was prepared and provided by tMedia, LLC, while other content presented may be from outside sources believed to be providing accurate information. Regardless of source, no representations or warranties as to the completeness or accuracy of any information presented are made or implied. tMedia, LLC is not affiliated with the Advisor, Advisor’s RIA, Broker-Dealer, or any state or SEC registered investment advisory firm. Before making any decisions, you should consult a tax or legal professional to discuss your personal situation. Investment Advisory Services are offered through SHP Wealth Management LLC, an SEC registered investment advisor. Insurance sales are offered through SHP Financial, LLC. These are separate entities. Matthew Chapman Peck, CFP®, CIMA®, Derek Louis Gregoire, and Keith Winslow Ellis Jr. are independent licensed insurance agents and Owners/Partners of an insurance agency, SHP Financial, LLC. In addition, other supervised persons of SHP Wealth Management, LLC are independent licensed insurance agents of SHP Financial, LLC. No statements made shall constitute tax, legal, or accounting advice. You should consult your own legal or tax professional before investing. Both SHP Wealth Management, LLC and SHP Financial, LLC will offer clients advice and/or products from each entity. No client is under any obligation to purchase any insurance product.