What's beautiful about AI?
Most of us don't realize how much our lives are already governed by Artificial Intelligence. From airfare pricing to financial decisions, legal judgments, hiring decisions, and more, AI systems control an ever-expanding range of domains. This is especially astonishing considering that the first digital computer was built less than 80 years ago.
Since the creation of neural networks and the establishment of Artificial Intelligence as an academic discipline in the 1950s, AI has advanced through various phases, including the development of machine learning algorithms in the 1980s and 1990s, leading to the sophisticated systems we see today.
For some, this trajectory indicates inexorable progress. Futurist and transhumanist Ray Kurzweil, for instance, proclaimed that by 2040, "we'll be able to multiply human intelligence a billionfold. That will be a profound change that's singular in nature." The allure here is that we're going to become healthier, faster, smarter as we become more and more integrated with the technology we create, eventually transcending our physical and mental limitations.
By contrast, others such as physicist Stephen Hawking have issued dire warnings about the threats that the unregulated development of AI poses to human existence. “The development of full artificial intelligence could spell the end of the human race," he told the BBC in 2014. "It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
How do we assess the promises and pitfalls of AI?
To answer this question, I spoke with Brian Mullins, CEO of Mind Foundry, an Oxford University company operating at the intersection of innovation, research, and usability. Brian has been at the forefront of how technology can positively change people's lives throughout his career. He has over a decade of experience in high-growth technology companies across industries such as Artificial Intelligence, Augmented Reality, and Robotics. He has been awarded over 100 patents, has testified as an expert before the U.S. Senate, received an Edison Award for Industrial Design, was named one of the CNBC Disruptor 50, and was selected as one of Goldman Sachs' "100 Most Intriguing Entrepreneurs." His company, Mind Foundry, aims to empower organizations with responsible AI built for high-stakes applications where decisions affect the lives of individuals or are made at the scale of populations.
In this final episode of Season 2, Brian and I discuss the evolution of AI in high-stakes applications, and his perspective on the beauty of AI in its simplicity and power to enhance human creativity. He emphasizes the importance of responsible AI, raises concerns around AI use in therapy, and explores the role of AI in economic sectors. He also dispels misconceptions about the purported intelligence of machine learning platforms. Distancing himself from both naive techno-optimists and alarmists, Brian allays fears (or shatters hopes, depending on your perspective) of the possibility of a super-intelligence or singularity.
Here are five key takeaways from our conversation:
- AI should be demystified so that we can make thoughtful decisions about it. We must keep in mind that we're still dealing with machines; it's a stretch to call their capacities "intelligence."
- There is beauty in understanding and in continuous learning. When it comes to AI, pursuing such beauty can help us harness its power to enhance our creativity and remove impediments to our flourishing.
- Responsible AI use is crucial, especially in highly regulated industries like insurance, where AI can be used to anticipate future risks and provide economic backstops.
- Collaboration between humans and AI can lead to powerful outcomes, leveraging the strengths of both. While there are concerns about AI, such as deepfakes, humans have the ability to adapt and to inoculate themselves against such abuses of technology.
- Expanding access to AI and making it more usable for a wider audience can help solve important problems in the world. At the same time, understanding the risks and limitations of AI is crucial to prevent major mistakes and ensure appropriate applications.
You can watch or listen to our conversation below. Please take a moment to subscribe and leave a review; it helps get the word out about our show. A lightly edited transcript follows.
With this episode, we have come to the end of Season 2. If you've enjoyed the podcast, please consider supporting us to help make Season 3 happen.
Subscribe wherever you get your podcasts: iOS | Android | Spotify | RSS | Amazon | Podvine
Interview Transcript
Brandon: Hey, Brian. Thank you so much for joining us on the podcast. It's so great to have you.
Brian: Brandon, thank you. It's great to be here.
Brandon: Yeah, wonderful. So one of the things I typically like to ask people these days about beauty is, could you think of an experience from your childhood that still lingers with you? An experience of profound beauty? It doesn't have to be the very first experience you had or anything. But anything that that word evokes for you? A memory? A sensation? What stays with you?
Brian: What a great question. Because there are so many types of beauty and so many impressions of beauty I have from childhood. It's interesting. I perhaps think a little bit differently about it. The example that comes to mind isn't going to be the typical moving aesthetic beauty. Rather, I remember having this observation and asking my dad, when we got a new couch: how come couches fit through doors?
Brandon: Wow.
Brian: I asked him: did somebody plan it, and is there a rule that they have to? He explained to me that, you know, the people who make couches just understand that there are doors involved, and that it would be good for them to make couches in a way that fits through doors. There was this implicit understanding of the human condition of the customers of the couches. As strange as that was, that kind of geometric relationship, that understanding of the human experience, is one of the most profoundly beautiful things I remember from when I was very young. Also, I get to mention an interaction with my dad, whom I love deeply.
Brandon: Yeah, wow. There's a profundity to that, right? Because it's an important combination of human ingenuity in terms of what we're building, but also the affordances of our environments and the importance of designing in a way that doesn't simply solve the problem of where do I sit, but also asks how it's going to be positioned in the environment in which I'm actually going to live. I think a lot of design tends to neglect that seemingly commonsensical set of considerations.
Brian: Yeah, I know. It's very true. And yet, when it gets it right, when things conspire to put those together, the geometry, that kind of shape of the world, takes on a beautiful form.
Brandon: Yeah, fantastic. Brian, you were named one of the CNBC Disruptor 50 and one of Goldman Sachs' 100 Most Intriguing Entrepreneurs. I'm curious to know whether you would use words like intriguing and disruptive to describe yourself. And if so, in what sense?
Brian: I'm not sure that I would. It wouldn't necessarily be my go-to, though. I do like disruption, generally speaking. I'm a very curious person. So I guess if I was intriguing, I think that's a wonderful thing. I like disruption because it speaks to breaking established patterns. I think it can be a very positive thing. I think the word doesn't hold a positive or a negative connotation inherently for me. The idea that you could disrupt patterns that were limiting yourself or patterns that were limiting groups of people, I think that's an awesome thing. And so I was honored to get those acknowledgments. I think they're pretty, pretty rad.
Brandon: I want to ask you a little bit more later about what those disruptions look like. Tell us a bit about your childhood and where you grew up, and how you got into both the tech space as well as entrepreneurship. What drew you in that direction?
Brian: I grew up in the United States in Southern California. I went to college at the United States Merchant Marine Academy at Kings Point in New York. I studied engineering. I was an engineering officer in the United States Naval Reserve and later got my start in venture-backed technology.
Brandon: Well, sorry. What led you to the Navy?
Brian: The academy.
Brandon: Okay.
Brian: It's interesting. It's an amazing school. It's one of the five federal service academies in the US. By default, all of the graduates have a commission in the US Navy. But it holds a unique position amongst service academies: I had classmates who went into all of the federal services, which was pretty awesome.
Brandon: Okay. And so from there, you were in engineering. Then how did you decide to move into entrepreneurship?
Brian: So, you know, I actually was working for the government initially and ended up at the Space and Naval Warfare Lab in San Diego, California. Then I was employed by Booz Allen Hamilton, the consultancy, while working in a US government lab. It was a really, really interesting time for me, because a lot of really smart people were trying to solve really hard problems. I use the term cross-pollinate: you get to see people solving problems with hardware, or software, or sensors, or logistics, or statistics, and everything in between, because of the immensity of the operations necessary to support something like the US Navy. And you get to work with some of the brightest minds in those fields. I learned quite a bit and really wanted to build on that.
I ended up starting a business with some friends. It was an engineering services business where we worked with automation and robotics. I actually had a very good time in that part of my career, where I got to go to different factories where different things were made and troubleshoot automation and robotics that weren't working. These were things and processes I'd never seen before, but I knew the automation and the robotic systems. That was amazing to me, because I got to see how food is made industrially, how cosmetics are made, how automobiles are made, how pharmaceuticals are made, and just about everything in between. It's fascinating to see human ingenuity, and where ideas are shared and then cross-pollinated, you have some of these amazing effects. Then sometimes you have these baffling things, where a food factory and a pharmaceutical factory use the same sterile stainless-steel component, but they don't know it. One of them pays hundreds of dollars for it, and one of them pays hundreds of thousands of dollars, because they don't cross-pollinate. It's fascinating to see the disconnect and how those inefficiencies occur as well.
Brandon: So how did you leverage that? Specifically, what was your first entrepreneurial initiative and how did that go?
Brian: So then I think I went from there. I had encountered a technology called augmented reality during my time at the Space and Naval Warfare Lab. At the time, it wasn't wearable or anything. You could take a video feed of the world, like looking through a camera, and essentially add special effects in real time. I remember the example that I saw: there was this diorama of a city, a model of a city on a tabletop. Someone held the camera and kind of flew it around at tree level. Then they digitally inserted into the video, in real time, a bunch of these zombies walking around. It was a little bit of a gimmick, but I immediately saw the power of it. It was special effects in real time. And if you could get it closer to the eyes, so people could see it while looking around, can you imagine how much faster you could bring information and, more importantly, knowledge to people wherever they are, whatever they're doing? But this was way ahead of its time, probably in the late '90s or early 2000s. And so it took a while for the technology to evolve.
I remember one year, on my birthday, I went to a set of talks in Los Angeles with my wife just for a special evening out. One of the talks was somebody talking about doing augmented reality on a mobile phone. It was the first time I'd seen somebody move it into that form factor. I thought, oh, my gosh. Now is the time. If you can do it on a phone, you can do it somewhere else. That was probably in 2010. It was very early, and it was still way ahead of its time. But that's actually how I got into venture-backed technologies. That passion for how technology could empower people really drove me to start my first venture and learn the world of venture-backed technology. That ultimately led me to where I am today, in Oxford as the CEO of Mind Foundry, an artificial intelligence company.
Brandon: Tell us about Mind Foundry. How did that get started, and what do you all do?
Brian: Mind Foundry was started by Professor Stephen Roberts and Professor Michael Osborne from the Machine Learning Research Group at the University of Oxford. It was spun out in order to create AI for high-stakes applications, where decisions would affect people's lives or would be made at the scale of populations in a way that makes them safety-critical. You couldn't just do off-the-shelf AI, where all too often we're seduced by the speed and performance of AI without knowing its fundamental method of action, what's going on under the hood. You actually had to do AI differently. You had to approach it like a safety-critical system. You had to develop specific methods that would give you insight into how it operated, how a decision was made, and what the potential failures would be, so that whether the environment was regulated or had specific governance requirements, you could understand what the impact would be on people's lives.
When I heard that these two professors, famous for their academic work in machine learning, were looking for a CEO to scale that business, it took me about 30 seconds to want to be a part of it. When I met them, they were super humble. Even if you're not close to AI as an industry, most people have probably read The Economist's Future of Work series about how AI will take work or transform the workplace, which is based on Professor Osborne's research. Professor Roberts has been a professor for about 30 years and has, I think, over 100,000 citations. Just an amazing, amazing man. It's a pleasure to work with both of them and the team that they've created.
Brandon: Are there particular kinds of — well, maybe for those in our audience who are not too familiar with AI, I mean everyone has heard of ChatGPT probably by now, but could you tell us a bit about specifically how the systems you're working with operate? What exactly are you doing, and then what particular problems are you solving?
Brian: Yeah, I think that's an important set of questions people should ask about AI more broadly. We're very comfortable calling the industry AI, right? It's a super-category. Everything in it is artificial intelligence, but it's really made up of a lot of things. Technically, most of those things are probably best characterized as machine learning, even the examples that you mentioned. What AI is, exactly, is probably a topic of great debate. But within the methods that are available in AI, some are much more understandable than others. And some allow you to do things more quickly, but at the cost of understanding. So there's always a balance: okay, how do we innovate, and then how do we make it robust for use in the real world? There's a tension between those things, but I don't think there's a conflict between them, and we don't think there is as a company either.
And so we work across the spectrum of technologies where innovation needs to occur, but with a greater requirement for understanding throughout the pipeline of an AI or machine learning application. What does the data represent, and how is it collected? What are the model types that are most appropriate, and what are they capable of? And then, more importantly, as you get to the other side of the pipeline: what decisions are going to be made based upon the recommendations of the system? What interventions might be taken? What is the direction of those interventions? How can we combine those things in a way that limits risk, allows the automated systems to do what they do best, and actually relies on collaborating with people, so that both together are able to accomplish more than either would alone? Which is pretty, pretty powerful.
We've seen that with all kinds of AI applications. I think one of the early examples, and it's still evolving to this day, is how chess is played in the world with AI. Now, if we're all honest, very simple chess programs beat most of us. So the day that AI beat a grandmaster seems like a lot, but basic computer programs were already better than most of us. A lot of people predicted from that the death of playing chess, or certainly of professional chess as a sport. It just wasn't the case. Not only does it still go on, there's also hybrid chess, where you can team up with an AI, or you can play alone, or an AI can play alone, with people trying all those configurations. I think we're currently in a period where AI-only teams are winning most of the time. But for most of the time since AI came around, human-AI teams together were actually much more powerful. And so there are these unexpected dimensions: okay, the AI is powerful, it can beat a grandmaster. But then it turns out you could take a high school-level player, put them together with an AI, and they could outperform any grandmaster in the world. And it's because you can leverage the strengths of both.
Then there was a great anecdote, I think from the last 5 or 10 years, where a couple of guys were playing who were not chess players at all, and they weren't AI experts either. They just had this crazy theory that if they made unexpected moves every once in a while, and then let the AI do what it did, the opposition could never really get a lock on what they were doing. They were right. They won the championship multiple years in the hybrid competition. So it really shows that as the technology evolves, we don't really understand the implications. We don't know what's going to replace humans entirely and what is going to benefit from collaboration.
I will go on record saying that the idea that we're close to a superintelligence emerging from this technology and spontaneously becoming self-aware feels very superstitious from where we sit. There's nothing in the method of operation of these systems that persists beyond a single request where consciousness could emerge, even if we thought the mechanisms and the models were capable of it at all. The complexity represented in these systems, while impressive from an application standpoint, doesn't even approach the complexity necessary for human consciousness. I think we really need to temper our expectations.
It's exciting, and sometimes scary, to see AI do things that only humans had previously done. But it's also a little bit of a gimmick. I think one clear illustration of that is the architectures that are breaking new ground today, these transformer-based architectures that are doing cool things with text that scare us and make us think: is it self-aware? They're not totally dissimilar to the generative architectures for images. But when we see an image generator make a picture, especially if it's got 20 fingers, none of us are afraid that that model is even remotely close to being self-aware and going to take over the planet. For some reason, when we put text in it, it scares us. I think it's because text has been uniquely human. We haven't gone through the uncanny valley with it the way we have with graphics and computer-based images, where we've gotten used to the synthetic nature of images and video. That's not to say it's not powerful. It's not to say there aren't risks in how you apply it and where you use it, or risks in how things are transformed economically. But my personal hard stance is that we're nowhere near superintelligence. And if that's something you're afraid of, don't worry about it. If it's something that you're really looking forward to, I'm sorry.
Brandon: Let's talk a bit about that. Because there are people for whom, I think, part of the beauty they find in AI is the seduction of this idea of the singularity, of being able to somehow transcend our humanity. Then you have posthumanism and transhumanism and various other ideologies that see in AI, and in technology more generally, the potential to transcend all of our human limitations, whether it's the ability to live forever or to build some kind of superintelligence that can mitigate all of the challenges that we face. Whether or not that intelligence will turn against us is a different story. But what assumptions about intelligence do you see being made there? Why do you think we can't get there, or at least that such a future isn't within any kind of immediate reach?
Brian: I think that's the question of our day. You would have to first define intelligence. If we're trying to make artificial intelligence, what is intelligence? One of the common metaphors is: do submarines swim? No, but they're very good at going through the water, which is what you want a submarine to do. That's where we are with AI and machine learning. These systems are very good at going through the water, but they're not swimming. They're not experiencing what a fish experiences. And if you look at the complexity of the human brain and the entire organic system around it just from an information standpoint, the complexity of every cell in our body that contributes to the conversations, the motivations, and everything that underpins conscious experience, it is significantly greater than anything we've been able to abstract into a computer.
And if you just do basic back-of-the-napkin math, the entire corpus of written documents that would be used to train a large language model is less information than the average four-year-old child has taken in with their eyes. So just in terms of the magnitude of training data, if you believe that the human experience is purely a brute-force neural network, that all it needs is enough data to make sense of the world, we're nowhere near the complexity that the average human has at the age of four. And yes, in written text form, there's more text than any one human could read in their life. But there's a lot more to understanding the world and the human condition than text documents.
Even in that brute-force configuration, you'd probably need to live as a human in human environments. And to make that form of brute-force learning of human consciousness viable from an information and compute standpoint, I don't think it adds up. That doesn't mean we're not going to be able to make really powerful machines that do awesome stuff, and there are risks associated with them as much as there are opportunities. That's what I mean. It's just, when you go a step further and say, oh man, the next version on the current trajectory of development could be the singularity, that doesn't seem realistic. Are we on an amazing hockey-stick curve in technology and innovation? Yes, but that curve is defined by an equation. If I zoom in or zoom out, the curve looks the same no matter where you are on it. So whether you look at the macro or the micro, yes, things are changing quickly. But it doesn't mean there isn't still a lot of development between where we are and that point. That is where we are.
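Brian's back-of-the-napkin comparison can be made concrete with some rough, illustrative arithmetic. Every figure in the sketch below is an assumption chosen for the sake of the estimate (corpus size, optic-nerve bandwidth, waking hours), not a number from the conversation:

```python
# Rough, illustrative arithmetic for the back-of-the-napkin comparison above.
# All figures are assumed orders of magnitude, not numbers from the interview.

# Assumed LLM training corpus: ~15 trillion tokens at ~4 bytes per token.
corpus_bytes = 15e12 * 4                            # ~6e13 bytes (~60 TB)

# Assumed visual input: ~10 Mbit/s per optic nerve, two eyes,
# ~12 waking hours a day, over 4 years.
bytes_per_second = (10e6 / 8) * 2                   # ~2.5 MB/s for both eyes
waking_seconds = 12 * 3600 * 365 * 4                # ~4 years of waking time
visual_bytes = bytes_per_second * waking_seconds    # ~1.6e14 bytes (~160 TB)

print(f"text corpus:        {corpus_bytes:.1e} bytes")
print(f"four-year-old eyes: {visual_bytes:.1e} bytes")
print(f"vision/text ratio:  {visual_bytes / corpus_bytes:.1f}x")
```

Under these assumptions, the raw visual stream alone already exceeds the text corpus severalfold, which is the direction of Brian's point, without even counting the other senses.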
Brandon: Talk a little bit about the work you all have done at Mind Foundry. What do you find beautiful about it? Where do you encounter beauty in the work that you're doing?
Brian: Just like with AI, where we'd have to define what intelligence is, we'd have to define what beauty is. I think it's an important question. There's a framework I find useful. I'm not sure where I first heard it, and I'm not going to be able to attribute it correctly, so I'll just say it's not mine, but it makes sense. It's that there are three natures of beauty. The first is the aesthetic beauty you'd expect when you think about beauty intuitively in the English language: perfect proportions, a perfect representation of function and form, a beautiful person, a beautiful piece of art.
But then there's a second nature, which is the beauty of simplicity. This is where you have an elegant mathematical equation that describes some fundamental force. It's beautiful because of how simple it is. The problem with this type of beauty is that sometimes it's a mirage. Sometimes the simplicity is beautiful, but the world has complexity in it. That leads to the third type of beauty, which is the beauty of true understanding, understanding the truth of the universe.
Brandon: Brian, that's our study. That's what we studied. That's the work we did on scientists, and that's what we found. So yeah, those three types.
Brian: Oh, is that directly from your study?
Brandon: Yeah.
Brian: So you can source it back for me. Apparently, you did such a good job that somebody told me the story.
Brandon: Of course.
Brian: I was not able to attribute it back, and I was explaining it to you. So hopefully, I did a good job of explaining it.
Brandon: Yeah, that's exactly what we found. Yeah.
Brian: I really found it to be a useful framework for understanding the stages, and in our work I actually see beauty of each of those kinds. Sometimes it's related to the problem that's solved. I think that's very aesthetic; it's kind of the beauty of a moral action. When you solve a problem and you're doing the right thing, making life better for someone, it's very beautiful.
I think the second one, simplicity, is the risk, especially when we're innovating. We've done some amazing innovation. I'm blown away every day by the team that I work with and the things that we come up with. Simplicity is sometimes a mirage, but sometimes it's a good direction finder. We've got a concept that we talk about a lot at Mind Foundry, which is simplexity. We live in a complex world. Complexity has emergence; it has all of these characteristics that lead to what people might call chaotic systems or emergent properties. So simplicity isn't always enough to describe those things. But the idea of simplexity is: can I see what is constraining the complexity that's emerging in this system? That's a very good direction finder for a greater level of understanding.
Then finally, I think one of the things I like a lot about the team I work with is the willingness to learn new things that challenge what we accepted as true before. That's the beauty of understanding. Sometimes you realize that you've been working for a long time under a bad set of assumptions and that you have to change them, or you won't be able to afford them anymore. That's not a loss. That's not a mistake. That's a win. We gained understanding. That's what it's all about: actually getting that deeper level of understanding. We only lose when we stop trying to get it.
Brandon: Do you have an experience or a moment that comes to mind where you could recognize this kind of beauty, maybe in an accomplishment that you're especially proud of at Mind Foundry?
Brian: One of the stories I like to tell is the collaboration that our founders and some of our team members were a part of with the university, within a larger group, exploring how AI and machine learning could be applied to mosquito interventions in the developing world, specifically with regard to malaria. It's just a really inspiring project. Again, it was something we collaborated on with a lot of brilliant people from foundations and from the university. We're honored to be a part of it.
The idea was to listen to mosquitoes with a cell phone and tell whether they're carrying the malaria parasite. It was at the cutting edge of modeling, specifically with regard to sensors, which is an area that Mind Foundry focuses on, but applied to something with sweeping applications. It's really beautiful to see the most cutting-edge technology, which is normally, like it or not, associated with things like mobile apps and games and ad tech, applied to a real, systemic human problem that has been plaguing humanity quite literally since civilization began. If people knew the history of malaria, and how many parts of the world you can live in now that you once couldn't, they'd see it's definitely still a part of our story. To see new technology applied to interventions in the real world with AI is very exciting.
Brandon: Yeah, wow. That's fantastic. It's absolutely a growing problem, right? I mean, with what we hear about climate change and its implications for the spread of mosquitoes and so on, it becomes really critical.
Brian: Yeah, there's also a really interesting piece of understanding in that project that stood out for me. It honestly blew my mind. The models are able to listen to the sound of the mosquitoes and tell you what species of mosquito it is, what sex it is, and, from the wingbeat pattern, whether or not it's laden with parasites, which is pretty amazing. But the profound thing was, we found that amongst the volunteers, some people could do it too. And that was amazing. Oftentimes, these models are picking up on signals in the training data when you let them work with people, even though there's uncertainty about whether a person is capable of that, whether the label they made was right or not. But then, when the models are able to make predictions, they're also able to tell you: hey, it turns out this expert here gets it right all the time. That's a kind of underlying discovery about what humans are capable of in the process. I love that part of the story as well.
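To make the kind of pipeline Brian describes easier to picture, here is a minimal, hypothetical sketch of wingbeat-audio classification: summarize each clip with standard spectral features, then train an ordinary classifier on labeled clips. This is an illustrative stand-in, not Mind Foundry's actual method; the file names and labels are invented, and it assumes the librosa and scikit-learn libraries.

```python
# Illustrative sketch only: a generic wingbeat-audio classifier, not the
# actual method from the project discussed above.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def wingbeat_features(path: str) -> np.ndarray:
    """Summarize a short clip as averaged mel-frequency cepstral coefficients;
    mosquito wingbeats show up as strong tonal harmonics."""
    y, sr = librosa.load(path, sr=8000)          # phone-quality sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)                     # one fixed-size vector per clip

# Hypothetical labeled clips: (file, species label).
train = [("clip_0001.wav", "an_gambiae"), ("clip_0002.wav", "ae_aegypti")]
X = np.stack([wingbeat_features(f) for f, _ in train])
labels = [label for _, label in train]

clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
print(clf.predict([wingbeat_features("new_clip.wav")]))
```

The same design extends naturally to the other labels Brian mentions (sex, parasite load) by training additional classifiers on the same features; comparing model predictions against individual annotators is also how one would surface the "this expert gets it right all the time" discovery he describes.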
Brandon: We talked a little bit about risks, and there are a lot of people worried that AI will somehow dehumanize us, strip away or rob us of certain things that we consider uniquely human, that make us who we are. What do you think is the potential of AI to contribute to our humanity and to our sense of who we are?
Brian: Yeah. When I explain AI to people, I sometimes refer to it as statistical compression. You're taking a lot of knowledge and trying to compress it into this model that you can then use to make decisions or recommendations quickly about something. With that understanding, it stops being this alien intelligence that maybe I'm afraid of, that maybe is dehumanizing me. It just becomes a tool. But it's a tool inspired by the creativity and work of lots of people. I think that's powerful. It's a way of amplifying creativity and making it available to more people in a way that maybe wasn't possible before. That's going to be a sticky mess, because there are artists who made the things these models were trained on, and that way of making a living is jeopardized.
But on the flip side, maybe we should question our ideas about trademarks and intellectual property. Is there value in making it so that anyone can create profoundly beautiful things, and that ideas, especially individual ideas, are amplified? I think it's worth exploring. I don't think you just judge it as good or bad. The example I would give: there's a lot of talk about things like deepfakes, videos that are going to put people in compromising positions or situations, or influence the way people think about them. I don't buy it. I mean, it's real. Of course, it happens all the time; you can encounter them all over the place. But I think humans do a good job of inoculating themselves against new exploits.
I don't think we're as bad as we look from the outside. Humans pick up on these patterns and manipulations within a short amount of time. You go from not being able to resist clicking on somebody's top-seven list of things you need to click on, number four will make you do it, to laughing because we're totally immune to it. But when it came out, we all have to admit that it was effective, and we all clicked on a lot of lists that wasted a bunch of time. I think it's the same with deepfakes. We're not going to be as manipulable as we think. In fact, with special effects and a budget, state actors and bad actors have been able to do this for a long time. Decades, right? You can use special effects to put people in compromising pictures or videos. We're just using AI to automate the special effects pipeline.
Now, what's the good side of that? Let's look at making movies. Individual creators, people who make videos on platforms like YouTube: what if they could, with an idea, a script, and their personal creativity, make a video with the production value of a studio movie? Then what matters is the incredible storytelling ability of an individual, not whether you are the equivalent of a bank for moviemaking that gatekeeps which ideas get out. I can't see that being a downside if we can let individuals tell stories with this, again, maybe profound aesthetic beauty that complements their storytelling and what they're able to accomplish as creators.
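Brian's "statistical compression" framing from earlier in this answer can be illustrated with a toy example: a large pile of observations gets compressed into a couple of learned parameters that can then answer new queries quickly. This sketch is purely illustrative and is not drawn from Mind Foundry's work:

```python
import numpy as np

# Toy "statistical compression": 100,000 noisy observations of a simple
# relationship get compressed into just two learned parameters.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100_000)
y = 3.0 * x + 2.0 + rng.normal(0.0, 1.0, x.size)   # hidden rule plus noise

slope, intercept = np.polyfit(x, y, 1)             # the "model": 2 numbers
print(f"100,000 points compressed to: y = {slope:.2f}x + {intercept:.2f}")
print("fast answer for a new query x=7:", slope * 7 + intercept)
```

A large language model does the same thing at vastly greater scale: terabytes of text are distilled into a fixed set of parameters that can be queried quickly, which is why the "compression" framing makes the technology feel less like an alien intelligence and more like a tool.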
Brandon: Yeah, well, that's a good example of disruption, I suppose. I know we talked about this earlier. But are there ways in which Mind Foundry is disrupting other industries, or is that something you're aspiring to do in the future? What does the next few years look like for you? What kinds of things are you trying to do? Are there other sectors in which you are trying to intervene and then to disrupt in some way?
Brian: Yeah, I think a lot of what we do is around helping our customers who work in high-stakes applications disrupt the old way of doing things, and maybe disrupt the market of options available to them. Commercially, we work with insurance companies. Insurance is about 8% of GDP worldwide. Their role is to anticipate future risk and be the economic backstop for uncertain times and things that would potentially destabilize markets and societies.
I can't imagine a city in the world, a developed city, that manufactures all of its own goods, its food, its medicines, everything it needs to survive, right? And if you had a situation where a truck broke down and the insurance policy didn't help you repair it, you lose that truck's capacity. Well, it cascades. The next truck goes down. There's less, and then there's less, and then there's less. In many parts of the world, trucks are small businesses or independent, individual operators. If they couldn't get their trucks back online, you'd actually lose the capacity to deliver goods. And at some point, you have to decide who's not going to get food, or what we won't have in the city. If you didn't have that backstop to risk, things could get really bad. Especially since, as you mentioned, lots of things in the world are changing dramatically, whether climate-related or geopolitical. If those destabilize the systems and economies that people rely on to live in the modern age, then without insurance we don't have those things.
Okay, why do I say all that? Because insurance is really important in all of our lives, whether we know it or not. We may be thinking about car insurance. We may be thinking about health insurance; where I grew up, that's how you get basic health care. Very important. And because it's so important, it's highly regulated. For insurers to be able to use AI, they need to be able to prove that they can do it responsibly. That it's not making decisions based on protected characteristics. That it's not being flippant about decisions in a way that will negatively impact people's lives. If you accidentally accuse someone of fraud who isn't fraudulent, it could destroy their life.
And so AI that can work within the requirements of those regulated industries is critical. You can't just use the off-the-shelf AI that people are using for chatbots or image generators. You have to build it in a way that considers the regulatory requirements and the impact on humans. That's what we do. We give our customers options in our platform to use AI in these highly regulated environments, so they can adapt to changes in the world and adapt their business model in a way that lets them be profitable and provide that service to as many customers as possible.
Brandon: That's great. Brian, I know you're not afraid of the singularity happening anytime soon. But are there any fears or concerns that you have about AI and about how these technologies are evolving?
Brian: Yeah, 100%, there are lots of things we should be concerned about. I think the most dangerous moment in AI is when people decide what to do with it. That's why my first instinct is to try to change people's thinking about AI as some kind of alien intelligence. If you mystify it, you take away your ability to assess what you should or shouldn't do with it. Back it up and remember it's a machine doing something, right? You can make a Turing machine out of mechanical gears. And if you were running that AI model on a mechanical, steam-piston-driven machine, would you be afraid of it becoming self-aware? No. So with that in mind, what we should be concerned about is: in what ways does it go wrong? And is that compatible with what you're trying to do with it?
I see a lot of people who get excited and say: you know what's expensive in therapy? The time of the people doing therapy. And if AI could do therapy for more people, then more people who are disenfranchised could get therapy and have better outcomes in their lives. Yeah, that sounds great. But how do AI chatbots go off the rails? They go off the rails pretty hard sometimes, right? Especially in the context of a therapeutic conversation, can you keep them from encouraging bad behavior? I don't think we know. In fact, I might go out on a limb and say we know that you can't. So there needs to be a balance between understanding how they work, how they can go wrong, and whether they're applicable for those applications. Or else we're going to have some big mistakes, right? It's going to be the AI equivalent of Three Mile Island, where we've got an accident. Fortunately, Three Mile Island wasn't as bad as it could have been. Will it be the same if you've got a therapy bot that makes a mistake and you've got a million customers, all of them suddenly having some pretty dramatically bad experiences? I don't know. Maybe we can learn how to measure and quantify the risk in those systems in a way that can deliver what you want. But I think we need a healthy dose of skepticism. So yes, we should solve big problems. We just need to understand how much understanding we need in order to apply AI in those cases.
Likewise, broadly speaking, it's worth considering economic displacement as AI does more and more jobs. Which ones do we want it to do? I think it's fair to say that, a decade or two ago, we all thought AI would be pouring concrete and building brick walls and doing all the stuff we don't want to do, and we'd be making art and poetry. It turns out it's exactly the opposite. That's not necessarily what we want life to be. Those applications are exciting, and they're the ones that have been easier to make progress on. I don't want to say "easy" and undermine the mountain of work that amazing groups of people have done to get where they're at, but easier than, say, an AI that can do plumbing or pour concrete or build a brick wall. Maybe, both economically and as a society, we should think about making AI that's good at the stuff we might not want to do, or that helps the people who do it focus on the creative part rather than the parts that are more dangerous or repetitive, versus the stuff we'd like to do ourselves, right?
Brandon: Yeah, exactly. That's right. What's on the horizon for you? What are you looking forward to in the next few years? Are there projects or ideas that you'd like to see realized, or to help realize?
Brian: I am excited about a lot of things with AI. I won't go into too much detail about some exciting new technologies that we've been working on at Mind Foundry. But I will say the excitement I have is around usability and user experience: how do we open it up so that more people can do things with AI that maybe they hadn't thought possible before? I don't just mean a text-based interaction. There are great things you can do with ChatGPT, and great things you can do with generative images. But there are a lot of other types of AI that are extremely powerful that most people aren't even aware of, let alone have access to. How do we put those into the hands of more people to solve more of the important problems in the world? That's one of the things we're working on at Mind Foundry that's pretty exciting to me.
Brandon: Can you give us a hint as to what are those sorts of things? Is it too classified?
Brian: I think I'll just tease what I've already teased. But it's some pretty, pretty exciting stuff.
Brandon: Great. Where can we direct our viewers and listeners to learn more about your work?
Brian: Well, if you're interested in the AI that we do, mindfoundry.ai is probably the best place to get hold of us. I'm @mrlaserbeam on X if you want to see my not-so-prolific tweets. Every once in a while, maybe there's a good one.
Brandon: Okay. Awesome. Anything else you want to add, Brian, on this theme of the beauty, and perhaps the allure, of AI that we've been chatting about?
Brian: It was a pleasure to talk about all the things that we did; we covered a pretty significant amount. The work I mentioned that people have done to get image-generating models or large language models to work, that's something. We have a ton of respect for all the breakthroughs being made across the industry. There are so many people working on it, and there are so many different types of AI. I think that's a beautiful thing, too: these amazing groups of passionate people who really want to make an impact, who really want to make a difference. I think that's a testament to all of the progress that we've seen in the field over the past decade and continue to see.
Brandon: Fantastic. Brian, thanks so much. It's been such a pleasure. I really enjoyed learning about you and the work you're doing.
Brian: It's been a pleasure. Thanks for having me.
If you found this post valuable, please share it. Also please consider supporting this project as a paid subscriber to support the costs associated with this work. You'll receive early access to content and exclusive members-only posts.