Lux Capital’s Josh Wolfe on Investing in Robotics


Over the last year or so, probably every venture capitalist has gotten interested in artificial intelligence. But people are still figuring out what types of business models actually work, and who will end up making money in the space. Josh Wolfe has been at it for a long time. As a co-founder and managing partner at Lux Capital, he's been involved in a number of deals in the space, and is already looking at what's next after the wave of excitement for chatbots that followed ChatGPT's release. On this episode, we talk to Josh about what he's excited about right now, including robotics, biotech, and maintenance. He tells us that just as ChatGPT opened everyone's eyes to the power of chatbots, a similar moment is coming in robotics. This transcript has been lightly edited for clarity.

Key insights from the pod:
Use of generative AI so far — 4:10
Burgeoning failures in chatbots — 7:04
Robotics models versus text generation models — 9:24
Key man risk in robotics — 13:19
Open source and robotics — 17:11
Due diligence on AI — 21:02
Robotic arms — 23:41
Transfer learning for robots — 26:21
Other types of robotic learning — 28:02
Incumbents and developing robots — 33:18
Moats on chatbots — 35:50
Energy use and AI investing — 40:56
Expanding the context window — 44:15
---

Joe Weisenthal (00:20):
Hello and welcome to another episode of the Odd Lots podcast. I'm Joe Weisenthal.

Tracy Alloway (00:25):
And I'm Tracy Alloway.

Joe (00:26):
Tracy, let's talk about AI some more.

Tracy (00:29):
Okay. Well we could just have AI write the script for us and save ourselves some time.

Joe (00:34):
No, I don't think the technology is there yet. You know what? So I can't say who, but I was talking to a professor recently and she said something really interesting to me. I'm not sure [I can say], it's fine, I think I…

Tracy (00:46):
Do it.

Joe (00:47):
And she said, there's all this anxiety about kids cheating on their essays or having ChatGPT write their essays for them and supposedly professors are tearing their hair out trying to figure out what to do about this.

And the AI detectors don't really work all that well, but apparently it seems like the solution is to just grade them as regular essays no matter what. And it sounds like at this point all the ChatGPT essays are basically solid C essays, so you can just take them at face value, even if you think they might be AI-generated. At least at this point, it doesn't seem like a way for college students to write good essays yet.

Tracy (01:25):
So our baseline, our average, is ChatGPT now?

Joe (01:30):
Yeah, that's basically it, can you beat the bot?

Tracy (01:32):
Did you see the thing someone was tweeting about? One of the tells for AI-generated words is if you use the word ‘delve.’

Joe (01:40):
Yes, I saw that.

Tracy (01:41):
Which, as someone who I'm sure has used ‘delve’ numerous times on this podcast and in my writing I thought was a little unfair.

Joe (01:48):
It is a little unfair.

Tracy (01:49):
It’s a cliché, but that doesn't mean it's from AI.

Joe (01:52):
All that being said, there's obviously more to AI than just ChatGPT and chatbots -- and this sort of has come up on a couple episodes recently, but only very tangentially -- people talk about the use of AI in industrial applications, and I've seen a lot of stuff.

There have been a couple of Bloomberg articles about some startups that sort of say like ‘Okay, well what if we trained robots the same way we train large language models, where you feed them just tons and tons and tons of real world data?’ So that, yeah, sure, you still have to solve the mechanical engineering part, but what if all that training data allows them to do more advanced industrial things like, I don't know, make a pizza or be a more powerful humanless assembly line or something like that? We see all these impressive robot videos, like Boston Dynamics, but I never know if any of this is quite there yet in terms of having real value.

Tracy (02:49):
Yeah, so the robotics aspect of AI is something that's incredibly interesting to me. It kind of makes me think about the world that we want to see. So it would be great if we had physical robots that are able to do stuff like clean up a house or take care of an elderly family member or something like that. It's not so great if all of our technological prowess basically goes into writing satirical lyrics via ChatGPT. That's fun. I can do that myself, but what I really need is someone to vacuum or dust the house.

Joe (03:23):
Do the laundry, would be really nice. Alright, well I just want to jump right into it because we really do have the perfect guest. We're going to be speaking to someone who has been investing in AI for a long time. A lot of VCs started investing in AI last year obviously, but this is someone who has been investing in AI for quite some time before it became the hot new thing.

We spoke to him last July and had a great conversation about what he was seeing in this space. So I'm really pleased to welcome him back on the show. We're going to be speaking with Josh Wolfe, co-founder and managing partner of Lux Capital. Josh, thank you so much for coming back on Odd Lots.

Josh Wolfe (04:00):
Great to be on. I feel like I should say ‘hello’ in like a robot voice.

Joe (04:06):
So what's interesting to you these days? What are you seeing out there that gets you excited?

Josh (04:10):
Well, you know, you guys started this off with AI, and within AI, look, we've had what I would call, maybe a little bit pejoratively, a little bit lewd and crude: we've had chips, everybody knows that. We talked last time about Nvidia and AMD.

We've got chatbots. You already have some of these guys that are starting to fail. And they've raised billions of dollars and in some cases [are] just relatively undifferentiated. [There's a] big debate between open source, which is approaching the asymptote of achievement of the big private models, and the closed players.

And then being a little bit lewd, you've got chicks. What do I mean by that? Most of the applications in AI are the mundane and passé on the one side, which would be like customer service and basic call center supplementation or substitution. And then at the other end you have people that are spending tens of thousands of dollars a month in some cases on AI girlfriends, and people doing what they often do with technology, which is use it for prurient interests.

So those to me are the two barbell extremes in AI, where people are actually making money and profits [by] serving demand for basic human instincts, I guess, and needs. That's less interesting. Overall, you're seeing a big shift from the compute piece to the energy piece, meaning people now recognize the bottleneck in AI is not going to be so much about the chips.

We also talked about this months ago when we said, look, you don't necessarily need these Nvidia chips for inference -- the part that most people do when they're querying all these models. You do need them for training, but the power levels on these things are just enormous.

There was a Dell earnings call where I think they sort of accidentally leaked that this Nvidia B100 Blackwell chip is going to be a thousand-watt draw of power, which is like 40%, 50% more than these H100 chips. Why does that matter? Because now you've got to figure out: how do you supply the energy for that?
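[Editor's note: for scale -- the H100's SXM variant is rated at up to 700 watts, so a 1,000-watt B100 would draw (1,000 - 700) / 700 ≈ 43% more power, consistent with the "40%, 50%" figure above.]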

And a tidbit, which I thought was interesting: Amazon acquired a nuclear-powered data center in Pennsylvania. They spent $650 million, they got about a gigawatt of power, and I think that that is going to be a trend. I think it's actually going to usher in a wave of what I like to call elemental energy: demand for nuclear power to power these big AI data centers.

Tracy (06:16):
Yeah, it's kind of funny when you think about it. I don't think anyone expected uranium to end up being an AI play, but here we are. I want to go back to something you said, and you actually brought it up the last time we spoke to you last year, and it was the idea of, I guess, the novelty of some of these more public-facing AI projects.

And I think you pointed out last year it's really fun to use some of these things [to] generate a bunch of cartoon versions of yourself or whatever, but it might not be a sustainable business model, and it might end up being a functionality that is eventually incorporated into another platform or a different project. Have you seen any specific examples of more public-facing novelty AI start to go away? I think you mentioned a few failures recently.

Josh (07:04):
Yes. You've had a whole bunch of companies that basically took ChatGPT, GPT-3 or -4, and put a wrapper around it, basically meaning give the average user who doesn't know how to use these, or even write prompts, some means to interact with it.

And those things raised a bunch of money, they made these things accessible, and they've sort of gone away. The foundation models undergirding all of this are also starting to be relatively commoditized away, and some of these things have prominent people and have raised a lot of money, but they are what I would consider failing.

Take Inflection. You’ve got Mustafa Suleyman, super smart guy, co-founder of DeepMind, [he] raised I think a billion and a half for that company. You know, I'm going to be careful here, because Microsoft has been probably the savviest actor in this entire game, figuring out that they can acquire things by doing things in a clever way, skirting FTC and DOJ oversight. They effectively control OpenAI, as I think we also talked about, particularly going into the end of last year when there was all the drama around OpenAI.

Satya said ‘Look, if OpenAI went out of business, we own it, control it. We've got all the data up, left, right, center, all around them.’ Same thing with this company Inflection. They did I think a $650, $675 million license for the technology that basically was a payment above and beyond what the venture investors had put in.

Venture investors made a little bit of money, not a lot. Key management went over to Microsoft, but Microsoft has been very clever. So back to your question, I think the big are going to get bigger and are going to be most of the beneficiaries here. Microsoft, Adobe, Amazon -- Amazon themselves [are] coming up on the one-year anniversary of Bedrock, they're going to announce that they've got the best-performing model with Anthropic, which they've made billions of dollars of investments in, now sort of a competitor to OpenAI's ChatGPT.

They're also going to announce something with one of our companies that hasn't yet been publicly disclosed, in biology. That's going to be one of the two biggest waves: biology, and what you started the conversation with, robotics.

If you just take robotics as an example: in one of our companies, Hugging Face, which is one of the main repositories for all these open source models, there's something like 60,000 text generation models. It's like 59,700, something like that, but just an enormous number of text generation models.

This isn't like two or three. It's like everybody's doing this, everybody's trying to do the same thing: basically trying to predict the next word based on the prior ones. That's what this transformer technology, which was invented at Google, ended up parlaying into. So, 59,000-plus in text generation. Guess how many robotic models there are?

Tracy (09:22):
A fraction of 59,000.

Josh (09:24):
I'm obviously leading with my phone here, but 19 robotic models. So that to me, as a venture investor -- we're just always looking at where is there abundance, where is there scarcity? There's scarcity of robotic models. Now why?

Well, it's relatively easy to train on the open internet. You've got Wikipedia, you've got YouTube videos, you know, whether you're supposed to be doing that or not. Like the woman from OpenAI who was asked about Sora, ‘Hey, how did you train these things? Was it on YouTube?’ You probably saw that.

So there's going to be all kinds of copyright stuff on that. Robots are hard. Why? Most of the robotic stuff that has been out in the world, like you talked about in your intro, is constrained in manufacturing facilities, in work cells, on an assembly line -- very specific, parametrically constrained, so very few degrees of freedom in what they're actually doing.

The robots themselves might have multi-axis grippers and controllers, but they're not moving around very freely. You have exceptions like Amazon, which acquired Kiva, moving the warehouse inventory stuff around, but again, relatively [constrained] X, Y, Z axes. Not unstructured environments. You and I and our listeners, we all thrive every day in unstructured environments.

And that is where you need enormous training data. You can't search the internet for that. So how do you do it? There's a few things that have emerged and you mentioned some articles. We funded a company that recently came out of stealth, called Physical Intelligence, instead of artificial intelligence, Physical Intelligence, and it is the crème de la crème team from Stanford and Berkeley.

You've got some OpenAI folks, you've got Google DeepMind folks. They took investment from OpenAI, us and a bunch of other VCs, and they are just 24/7 training robots doing all kinds of crazy things like folding laundry and pouring detergent, to let these robots encounter unstructured environments and then be able to thrive in them. The next thing that you're going to see are visual models, where you're effectively giving an IKEA sketch or you're drawing something, and you're able to instruct the robot to have a sense of intuitive physics of how the world works and how things might connect to each other, and then learn from that.

And then we're also training these robots with simple verbal cues. So there's a video you can see online from some of the researchers where they are picking nuts and M&Ms and separating them just as a task of being able to sort and filter with precision and dexterity. And if they picked the wrong one, you can actually, instead of physically grabbing the thing, say ‘Stop, grab the M&Ms, not the nuts.’ And now it knows that.

So I think that we're about to unleash in robotics what will become a ChatGPT-like moment, where people are so used to seeing robots -- they see the arms and they've seen Westworld and this kind of stuff -- and suddenly something happens and it just blows your mind. And I think that's coming soon.
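[Editor's note: for readers who want the "predict the next word based on the prior ones" idea made concrete, here is a toy sketch in Python. The corpus is made up, and real text generation models are transformers trained on billions of tokens; this bigram counter only shows the shape of the task.]

```python
# A toy next-word predictor: count which word follows which, then
# generate greedily. Real models replace these counts with a
# transformer's learned probabilities over a huge vocabulary.
from collections import Counter, defaultdict

corpus = ("the robot picks the cup the robot folds the laundry "
          "the robot picks the laundry").split()

following = defaultdict(Counter)          # word -> counts of next words
for prior, nxt in zip(corpus, corpus[1:]):
    following[prior][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

word = "the"
for _ in range(5):                        # greedy five-word generation
    print(word, end=" ")
    word = predict_next(word)
```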

Joe (12:27):
That's pretty exciting. Because like I said, for at least a decade I've been watching those Boston Dynamics videos on YouTube, and at this point I'm sort of convinced that it's basically a content generator, because it never seems like the crazy robot dogs or anything ever become commercial. But maybe this is the missing link. You've brought up three different avenues we could go down, and I want to eventually hit on all of them.

Here's a specific question, then we can maybe get back to robotics. There's this element where there is such a shortage of advanced, cutting-edge talent, people who really know how to do this stuff. And you mentioned that guy that got hired by Microsoft from his other company. As an investor in AI or robotics startups, is this a dynamic that's different than in other software or other tech investing? This sort of highly-skilled tech key man risk, basically?

Josh (13:19):
Yes, in that you always are looking for what's scarce and you want scarce talent. If anybody could do this, it's just not that valuable. Companies would get funded, VCs would fund, you know, 40 of them at the same time or maybe 400 of them.

Contrast [that] to things where it's very web-based or the old Groupons of yesterday, this is highly technical, often PhD scientists. The vast majority of the founding teams that we've backed at companies like Covariant or Formic or this new company, Physical Intelligence, they're all PhDs that are coming out of Stanford, Carnegie Mellon, MIT, some of the best robotics programs in the world.

And there's a lineage of these great professors, many of whom have passed. For example, there's this one guy, Hans Moravec, [who] used to be at Carnegie Mellon. I got to meet him, and he was one of the very early pioneers in robotics. He's got this paradox that insiders in the robotics world call the Moravec paradox, which is this weird counterintuitive phenomenon: basically, all the stuff that we think is really hard is actually pretty easy for AI, and all the stuff that we find totally intuitive and easy, like riding a bike, is really hard for robots.

So there's this great paradox that some of the most brilliant researchers are working on, which is how do we do the kind of stuff that a 4-year-old can do very intuitively with these very complex expensive machines?

Then there's all kinds of considerations we could talk about, like where are these arms coming from? The acquisitions that China has been making of what historically were a lot of German companies -- I mean, when I say arms, [I mean] the robotic arms that can move things.

And then there's this great philosophical debate that hasn't yet come to the fore, but I believe will, and investors are sort of lining up on this. I'm on the opposite side of some; people are funding humanoid robots.

And the reason that I say I'm on the opposite side of it is I don't really believe in them. Yes, you would want somebody to help take care of your grandma and maybe provide some companionship, but this idea of the movies, of these Ex Machina kind of robots that embody a human form, we know that engineering is better than evolution.

If we were inventing a car tomorrow, it would be a terrible idea to take Fred Flintstone and use his feet to power the stone wheels. We know that an actuator and an axle and an engine are just better, and evolution didn't create that. Why would we create these humanoid hands, where if I'm twisting the cap off of a bottle, you know, I have to turn my hand like seven times to do that? Whereas if I was just designing the perfect robot, it’d have like a little suction cup, it would go on top, it would have a drill bit mechanism, it would quickly twist it off, and then it would Swiss-Army-knife swap out for the next technical gripper capability.

So I think that people are misguided and they're basically going to end up doing things for prosthetics or something that's sort of Westworld. But I think the practical robots that we're going to all be using in our homes are going to look nothing like these humanoid robots.

Tracy (16:05):
This is funny, this is very reminiscent of a weird conversation I used to have with my dad. He had some sort of bugbear around the shape of aliens' hands and he was like ‘Why are they always shown or depicted in these illustrations as having humanlike hands, or sometimes even three fingers? Why wouldn't they have just evolved to the next level of very, very efficient physiology?’

Anyway, one thing I wanted to ask, and I'm trying to think how to frame this question or what the right word is, but how open source is robotics, in the sense of how much of the technology is shareable or replicable? Because I feel like one of the reasons, and you touched on it earlier, but one of the reasons we have seen this boom in AI is because you can go to places like Hugging Face and download a bunch of open source code and build off of it, and it sort of multiplies around itself. But is there any aspect of that at all in robotics, or is it just much more proprietary?

Josh (17:11):
The hardware piece has historically been very proprietary, although there's lots of knockoff stuff. There is a Chinese company which is increasingly dominating the field. A lot of people don’t know the name, [it’s] called Unitree, U-N-I-Tree, that is sort of copying the Boston Dynamics robots that Joe was talking about and that you see in Black Mirror episodes and those kinds of things.

On the software side, it really is more open, because it has the same kernel, the same origins as many of the AI software advances that led to large language models and came from Transformers -- academic roots. And academics like to share and publish. Of course you can patent certain things, but by and large the early systems were open: something called ROS, [which] as you might guess [stands for] Robot Operating System; people that were doing something called Arduino, which is sort of for hobbyist programmers at the intersection of hardware and software; Poppy. There's a handful of these things.

But again, now you're in this mode where you need to find training data, and you need to do the work and spend the time, and that costs money. So you will have a mix of open and closed models, and if you take a company like Physical Intelligence, their motive is: we want to build the operating system that any robot can basically use to navigate the world. They want to build the brains for the robots, as opposed to the robots themselves.
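[Editor's note: for a flavor of what the open ROS ecosystem mentioned above looks like in code, here is the canonical ROS 1 "talker" node, lightly commented -- a minimal sketch, assuming a ROS 1 installation and a running roscore; the "chatter" topic name is just the tutorial convention.]

```python
# Minimal ROS 1 publisher node: everything here -- rospy, topics, the
# message types -- is open source, which is part of why robotics software
# has shared academic roots even while the hardware stays proprietary.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(10)                      # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello robot'))
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```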

And there's an interesting philosophical and scientific tangent here. Barbara Tversky, who is a friend -- and she was the partner of the now-late Danny Kahneman, who was also a friend -- her work, and she was far less famous but I think [did] actually more important work, is all about motor function. Her hypothesis is that the entire reason for the existence of the brain, its sole purpose, is to actually produce movement: movement towards food or a mate, or to run from a predator, which in turn is doing the same kind of thing.

And I think some of the most interesting philosophical questions are about consciousness and memory, spatial perception, embodied cognition, gesturing -- I mean, I'm wildly gesticulating with my hands while I'm talking now. It's just an innate thing. Being able to do mental simulations, when we think about human brains and machine brains merging -- I actually think it's going to be very revelatory, as these robotic systems advance, for us to understand that a lot of the purpose of thinking and intelligence is actually just about moving. And so that's a pretty cool side effect that I think is going to come from all the commercial and speculative venture stuff that we do in funding these things.

Joe (19:42):
You keep saying things that jog me onto something else that I was meaning to ask you. You mentioned this impulse that academics like to share their work, and that reminded me of something that someone was telling me. So, you know, I'm sure you are on this site a lot, but for listeners, there's the site arXiv.org, where people publish research in sort of an open, ungated manner about all kinds of scientific and computer things.

And just today on the artificial intelligence page, there are like 15 new papers with headlines like “Autonomous Evaluation and Refinement of Digital Agents” or “A Modular Benchmark Framework to Measure Progress and Improve LLM Agents,” and this guy who I was talking to made the contention that all this stuff is being published, and a lot of VCs don't really have the technical chops to judge a lot of this research.

They're like the ‘Here, take my money’ meme. I'm curious what you see in this space, where there must be a lot of investors like yourself who are wowed by PhDs doing all kinds of stuff. It's like ‘Oh, we got a 100X breakthrough in the energy efficiency of this Nvidia chip by training the model differently.’ How do you evaluate the science, and how risky is that for investors, with this sort of plethora of ostensible breakthroughs happening left and right?

Josh (21:02):
Well, you're right, there's endless -- I mean, it is why there's this endless frontier, in sort of the Vannevar Bush sense -- and it's always very exciting, and you need to have a very high filter for these things. Not everything is commercial.

Sometimes there's just a breakthrough, but maybe that breakthrough can be licensed to a company. And so [what matters is] the people who are actually commercializing this and thinking about capital allocation and recruiting teams, and then deciding ‘These are our top three priorities, and even though there's 40 other really exciting things that we could and maybe should be doing, we're just not going to do that now.’

That's really what company building is about. And so oftentimes we might have a brilliant scientist, but maybe that person just isn't a good salesperson. They can't tell a narrative, can't convince people to move across the country and join them. They can't raise capital, and therefore they're not going to be a great entrepreneur, and they're probably better off as a scientist. But when we're making evaluations, it's: how much is this money going to accomplish, in what period of time, and who's going to care?

It's like if you were playing poker: you're looking at your hand, you're figuring out how much money you have to ante up for the next round, and then what's the exogenous, the outside view? What's the market going to say, and will they care? That's why I’m -- you know, we talked a little bit about this -- very skeptical about other fields that people are funding right now, in fusion or in quantum computing.

I'm so skeptical about them in part from the hardened cynicism of 20 years of people pitching us things that are always about unbreakable cryptography and femtosecond this-or-that, and I'm like, so what? Most of the things that people promised, you know, about unbreakable cryptography or molecular modeling, people are doing -- they're just not using quantum computers. They're using GPUs, they're using Nvidia chips, they're using new algorithms.

So I've been very skeptical about that. Also been very skeptical about fusion for the same sort of reason that, to your point, there's this ignorance arbitrage that people take advantage of. They take advantage of [the fact] that investors don't fully understand something, it's hot, it's on the front page of a newspaper or magazine or whatever. It's a buzz and they want to play in it and so they invest in it and that's how you get frauds.

So we're always looking and basically trying to say: is this academic practitioner commercially minded? They're probably not going to leave their job, so we're only getting 20% of their time. Is the intellectual property real? And then, to your point about the papers, oftentimes there is high referenceability in papers. A paper that is cited enormously has a lot more credibility, because you have the vainglorious error detection and correction of other scientists trying to shoot down the person who has all the fame for doing it, trying to seize that mantle.

And so scientists are not a benevolent bunch. They're just as competitive as an investigative journalist trying to break the story, or an investor, or an A&R rep trying to sign a band before everybody else. We're no different, they're no different, but that's how all this stuff progresses.

Tracy (23:37):
Talk to us about the robotic arm. I'm going to take the bait.

Josh (23:41):
The first point I would make is I don't believe in these humanoid robot arms with fingers and, you know, high dexterity. I think it's a cool parlor trick, but I think we should have robotic arms that look more like Swiss army knives, swapping and moving stuff -- and you can see a bunch of these things online, you know, different tools for different tasks, able to switch between them instantaneously.

When you zoom out to the industry structure, you've got FANUC, which is a major Japanese player. They started with industrial robots. They do factory automation, but they've been a key player -- probably a $25, $30 billion enterprise-value company. You've got ABB, the Swiss-Swedish multinational [that makes] industrial robot arms. Those are most of the things that you would see in even a Tesla gigafactory or something, where Elon's talking about all their automated things -- they've got some of these ABB arms.

Or the other one is KUKA, K-U-K-A, which is a German company, and they were one of the great leaders. They got bought by a Chinese company, I want to say [in] 2016 or 2017. And China's made a bunch of, I think, very smart investments acquiring technologies that were a little bit before their time, and I think it presents some geopolitical things that probably in five or 10 years we'll be looking at and saying ‘My gosh, how is the dominant robot arm supplier or robot body supplier [a Chinese company]?’ -- akin to being dependent on TSMC. So I think there's going to be national robot companies that form, in the same way you're starting to see national AI companies form.

Joe (25:07):
How does a company like Physical Intelligence solve the data problem? Because like you said, there isn't the equivalent of a robot internet, where they can watch millions of hours of a robot arm trying to do something, or humans doing something or whatever. What is the approach by which the token problem is being solved?

Josh (25:26):
The easy way to do it, which is actually conceptually trivial but hard in practice, is doing what people originally did with robotic surgery. So we had a company called Auris Surgical Robotics. We sold it to J&J for $6 billion. It started with surgeons operating these things like a telerobot, so they had little pinchers on their fingers and, you know, from five feet away, or in a totally clean operating room, they were operating -- but it was their hands being transmitted to the device.

And so that is the first way: come up with a hundred different tasks, maybe the highest-frequency things like washing dishes, folding clothes -- again, unstructured environments -- being able to do them in multiple different houses, at multiple different heights, with wet clothes, dry clothes; being able to pour coffee; being able to have the dexterity to open up a K-cup -- I actually don't drink those, I think they're disgusting -- and put it into a coffee machine.

Joe (26:18):
I don't drink any of that slop.

Josh (26:21):
And there, it's an engineer that is operating them. And the movement -- compensating for gravity, how much force, how much tension, how much pressure -- that's all information. It's information that historically has not been captured, and some of that is then extensible. And this is a really cool thing.

You can see some of these robots -- you might have five different robots, but they have this amazing thing called transfer learning. You teach one robot a thing, and suddenly another robot, which is disconnected, you know, or connected only through the internet, can actually learn what that robot just learned and perform the task.

So that's actually pretty eerie and pretty cool. It's the same sort of thing where if I had one robot that saw where I tossed a ball in a room, but three other robots didn't know, they would instantly know because they have the eyes of the first robot. So there's all kinds of training like that.
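[Editor's note: here is a toy numerical sketch of the transfer learning idea -- not Physical Intelligence's actual method: robot A learns a linear mapping from observations to motor commands from lots of data, then robot B starts from A's copied weights and adapts with only three demonstrations of its own. The task, the data, and the policy are all invented for illustration.]

```python
# Transfer learning in miniature: copy the weights one robot learned,
# then fine-tune them on a second robot's tiny dataset.
import numpy as np

rng = np.random.default_rng(0)

def train(W, X, Y, steps=200, lr=0.1):
    """Fit motor commands Y from observations X by gradient descent."""
    for _ in range(steps):
        W = W - lr * X.T @ (X @ W - Y) / len(X)
    return W

true_W = rng.normal(size=(4, 2))            # the "physics" both robots face
X_a = rng.normal(size=(500, 4))             # robot A: lots of training data
W_a = train(np.zeros((4, 2)), X_a, X_a @ true_W)

X_b = rng.normal(size=(3, 4))               # robot B: only 3 demonstrations
Y_b = X_b @ true_W
W_scratch  = train(np.zeros((4, 2)), X_b, Y_b, steps=20)
W_transfer = train(W_a.copy(), X_b, Y_b, steps=20)   # start from robot A

X_test = rng.normal(size=(100, 4))
err = lambda W: np.mean((X_test @ W - X_test @ true_W) ** 2)
print(f"from scratch: {err(W_scratch):.4f}  with transfer: {err(W_transfer):.4f}")
```

Running it shows the transferred policy generalizing far better than the from-scratch one, because three demonstrations alone cannot pin down the task.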

Then there's schematics and drawings. So I was alluding to this before [with] IKEA drawings, but once you can take diagrams and schematics and actually use visual language models -- this is something that OpenAI, [which] is a partner here with Physical Intelligence, has pioneered some of. That's going to be wild too, where you can literally just show an IKEA drawing, and the robot goes, with a constrained set of pieces that it has to put together, a set of screws and wrenches and whatnot, and it can completely assemble whatever it is -- a nursery thing or furniture or a desk -- which I think is going to be pretty wild for people to see too.

Tracy (27:43):
That would actually be a major, major quality of life improvement, not having to put together IKEA furniture. That'd be amazing.

Could you talk a little bit more about how robots are learning and I guess what the different types of learning are or different patterns of learning and what you've seen that's been most promising so far?

Josh (28:02):
You have two main categories, or maybe three. You've got supervised learning. There, you've got input data; the robot is learning, it's being corrected, it's being told in some cases, like I described before, whether it's [by] voice or a gesture or a nudge.

Then you have unsupervised learning, where the robots are basically training on unstructured data. They get to discover patterns, they encounter the world, they encounter boundaries, gravity, those kinds of things. And it might be slower in that case, but they're reducing the dimensions for error. There's a term that I coined, and I call it MBTFU, which is mean time between eff-ups -- you want that to be as long as possible.

If you go back to the early Roombas, a Roomba would not know if it was cleaning up a spill of chocolate milk or if your dog made a mess and it [was] smearing it like Nutella all over your floor, right? You want to make the mean time between errors as long as possible.

Then you've got reinforcement learning, [and] imitation learning, where you might be controlling the robot or having it mimic you. There's this idea of transfer learning, where a single robot learns something but can transfer it to different robots from a different domain.

So people are trying lots of different approaches, and with more different mechanisms, people are then going to figure out: okay, which is the least data-intensive, or which has the lowest latency and is the quickest? Or which is the best system for training a robot that you put in a totally unstructured environment and, without any training -- sort of what they call zero-shot learning -- it's able to figure out from prior knowledge: ‘I know that I can't go through that chair. I know that it swivels, I have to turn it this way. I know how much force I need to pick up a regular Coke can,’ and those kinds of things.

And I think again, it's going to be enlightening how much we, as we navigate any given minute in our life, take for granted all this intuitive tacit knowledge that we have about the physical world. There really is this intuitive physics of how we move around. Robots are going to learn that.
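[Editor's note: a toy sketch of the "teleoperation first" imitation learning recipe described above: record (observation, action) pairs while a human drives the robot, then fit a policy with plain supervised learning. The task, the stand-in "expert," and the least-squares policy are all invented for illustration; real systems use deep networks over camera and force data.]

```python
# Behavior cloning in miniature: learn observation -> action from
# recorded demonstrations, then act without the human in the loop.
import numpy as np

rng = np.random.default_rng(1)

def expert_action(obs):
    """Stand-in for a human teleoperator: steer toward the target."""
    position, target = obs[:2], obs[2:]
    return target - position                 # move toward the target

# 1. Collect demonstrations by "teleoperation."
observations = rng.uniform(-1, 1, size=(1000, 4))  # [x, y, target_x, target_y]
actions = np.array([expert_action(o) for o in observations])

# 2. Clone the behavior: fit a linear policy by least squares.
W, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# 3. The cloned policy now acts on a new observation, human-free.
obs = np.array([0.2, -0.5, 0.9, 0.4])
print("policy:", obs @ W, " expert:", expert_action(obs))
```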

Joe (30:12):
Just [a] really simple question: in the next five years, is it plausible that I'll have a robot in my house where I could take all the clothes out of the dryer, drop them into something, and have them turned into folded clothes? Or if not that, what could be the ChatGPT of robots that's right around the corner?

Josh (30:31):
Well, one thing which I posited to the team, because I was thinking about exactly that -- you know, what would be a really cool thing -- I lose stuff in our apartment all the time. We have a bunch of different rooms. I would love to basically say, which is often what I do to my wife and three kids, ‘Has anybody seen my wallet? Has anybody seen my glasses?’

Okay, just announcing that. You can see that a robot, with a series of things in your home that would have visual identifiers and visual machine learning to be able to spot objects in a video frame, could say ‘Yes, Josh, I know exactly where they are’ and go and retrieve them. So fetching and retrieving objects in the home to me would be a pretty cool thing. ‘Where did I put that remote?’ or whatever.

And the robot basically knows, because it can go through the DVR of the home, and it basically knows where it is and can fetch and retrieve it with the right physics. Folding laundry, I don't know how to handicap that. And we can do that now pretty crudely…

Joe (31:24):
That's the one I want. Because I have a small New York City apartment, I don't lose things too much. Basically I need that folding robot.

Josh (31:32):
What are you going to pay for that though? I don't know. Would you pay five grand, 10 grand for a robot that folded your clothes? Probably not. That's why most of these things have found their way into industrial uses first. And over time they'll get cheaper and cheaper. They'll get better and better.

But look, I'm one of the few people that have an Amazon Astro. It's a robot that rolls around the house, and you can tell it to go into a room, you can put something in the back of it, and it can do facial identification for my family. So I can say ‘Where's Quinn or Bodhi?’ and it'll find my younger kids and I can send a message. It annoys the hell out of my wife. But I think it's sort of cool, and yeah, every robot that comes out, I'll be an early adopter. We like finding these things.

Tracy (32:33):
So one of the interesting things about the current tech cycle and all this enthusiasm for AI is that so far it's been the big incumbents who seem to be winning here and part of that is because the capital investment needed is so large, the amount of data needed is so large.

When it comes to robotics, would you expect to see a similar thing and then added onto that, could you have a situation where if you are a large manufacturer or perhaps you are a company that has a lot of proprietary data like an insurance company or a financial company or something like that, could you develop your own robotics? Would that be your edge here and could you potentially just do it yourself?

Josh (33:18):
On the first question, if I look at the current landscape, the one company in sort of the big Magnificent Seven would be Amazon, just because they have already been investing in this for a long time. Jeff Bezos is very passionate about robots. So I think there's a DNA there for doing things that enable the three things Jeff has historically loved to do, which is increase choice and availability, increase convenience for customers, and lower prices -- factory automation, warehouses, delivery.

Even when they bought our company called Zoox, for a little over a billion dollars, that was doing autonomous driving -- they have a long-term intention to be able to do 24/7 right-hand-turn lanes, navigating around the city, with a human that's basically delivering last-mile kind of stuff. So I could see Amazon doing that.

Microsoft, I don't really see them getting into robotics in a significant way. They have some R&D efforts. Google -- and we talked a little bit about this phenomenon -- they are the third or fourth group where we have taken a team out of a Big Tech company. We did it with Google with a company called Osmo, to create basically Shazam for smell. We did it with a bio AI company called Evolutionary Scale out of Meta that's going to be announced more publicly soon.

And then we did it with this team out of Google that became Physical Intelligence -- between Google DeepMind and OpenAI and Stanford and Berkeley. So I think that this is going to be more the startups. The beneficiaries of the money will still be Nvidia and some of the chip players, some of the hardware providers.

We need the hardware to be able to train the robots. But I really think that this is an open field, and again, just going back to that stat of how many large language models there are for chatbots and how few there are for robots: I think it's a big opportunity. Now, five years from now we might have a bubble in robots, but today I think it's a really exciting field.

Joe (35:05):
Just on existing AI and competitive advantage: when I first encountered Google, in the year 2000 I think, I used it and I was like ‘This is way better than Yahoo or anything else.’ I never stopped using it after that.

With chatbots, just going back now even though we're talking about robotics here, it's like, I got a ChatGPT Pro account or whatever early on, and I thought it was pretty cool and I used it for some stuff. And then the new version of Claude came out from Anthropic and I was like ‘Oh, this is actually kind of cool and I kind of like it more.’ I don't really know why I like it more, but for some reason I like it more, and I had no problem switching over. Could it be possible that some of these core models do not prove to be as sticky or have as deep moats as people expect?

Josh (35:50):
Totally. Inflection, which launched theirs, called Pi, just wasn't that good. They've now gone to Microsoft. Anthropic at first was a little bit behind; now they're the best-performing model. They're the one that I also, like you, use the most. Why? It's fastest. It has a little bit more…

Joe (36:08):
It seems to talk a little better.

Josh (36:11):
Yeah. But here's an interesting thing, to your point about moats and Google early on. Remember, Google's ad revenue is like $280 billion -- you know, enormous -- and most of those searches are like five-to-10-word searches. So you put something in, you're like, I don't know, ‘Restaurant in West Village,’ or whatever, okay.

Perplexity, which I don't know if you've used -- we met the founder early on and we ended up not investing, and it was probably a miss for us. But the founder there focused [on] ‘You know what? I'm going to do the 0.2% of searches, or the 1% of searches, that are longer form, where people really want to ask a full question.’ And I see a lot of people that are using it, and it's not coming up with some woke answer or some pithy, you know, Grok-like response.

It's actually a well-researched, footnoted, sources-cited response. So to me, the two things that I use most on the chatbot side right now are Claude from Anthropic and Perplexity. But in six months there might be something totally different, and [there's] the sheer number of models that are available on open source. Who knows what Apple ends up doing here, and that being integrated. You know, Siri sucked, but so did Alexa. Amazon's making moves. Apple will too.

But you zoom out for a second. All of this stuff is in venture, [and] venture is going through this downturn, and the one area where there are very high valuations and a lot of money flowing and a lot of talent has been AI -- and therefore the future returns on these things are going to be lower.

And that's why we at Lux decided, you know, we're going to focus on AI in the physical world, having done all this stuff over the past five years. I always say there's this five-year psychological bias: everybody wants to be invested today where you should have been five years ago. So Hugging Face and Mosaic, which we sold to Databricks, we were in those five years ago. Today, we're really interested in biology and robotics and AI's use in those.

And then I have a really weird theme, which is so sexy to me because it's unsexy to others. You understand accounting; [the] vast majority of venture investors and many startup people don’t. But take CapEx. CapEx is made up of two pieces: growth and maintenance. And everybody's been funding growth, growth, growth, growth, growth in venture.

So I got interested in maintenance. Why? Because you have trillions of dollars of assets -- infrastructure, hospital systems, energy systems and generation, buildings -- that need to be maintained, and every new startup and every new investor always wants to do the new new thing. It's why we get new music and we get new food and we get new fashion. But there's all these neglected assets, and I think you can apply new technology to maintaining these systems. And so I've become obsessed with this unsexy theme of maintenance, which I think is going to become a hot area over the next few years.

Tracy (38:55):
Wait, you mean maintenance of physical infrastructure? So the idea that you could have, I don't know, a little robot that goes around your factory or a bunch of highways and sort of surveys them for cracks or things that it thinks need to be fixed?

Josh (39:08):
Totally. It could be infrastructure for transportation. It could be inside of hospitals, where there are rote, routine things. And there's also, oddly -- and I know you guys have covered this -- the idea that AI is really coming for the white-collar workers. You know, you joked that you could talk about AI and generate a script based on AI…

Tracy (39:28):
Oh no, it wasn't a joke. Very serious.

Josh (39:31):
Yeah, they always thought that they were relatively insulated and that it was the blue-collar workers [at risk]. But let me tell you, the guy that put me in business, Bill Conway, a founder of the Carlyle Group, he's spending all his philanthropic money, or a significant portion of it, funding nursing schools. Why? Because he identified a very high-magnitude impact, because we have such a shortage of nurses in this country.

That's an opportunity for maintenance, where robots and technology can play a role. How do you augment and help nurses? Plumbers -- we have a massive shortage of plumbers in this country. And so I actually think that blue-collar workers, empowered by technology and maintaining all of these systems around us, are actually going to be a winning combination.

Joe (40:09):
I want to talk about another aspect of, I guess, AI investing, which is that in the SaaS wave, the 2010s decade, compute was very cheap. And so basically for that part you just plug into AWS and it's sort of, yeah, I know it probably costs some money, but it's not a big line item ultimately for a lot of these companies.

How does that change in 2024, when you're dealing with an AI company and electricity bills exist, or hardware accumulation, depending on where they are in the stack? How, as an investor, do you think about the change -- I guess people will talk about the shift, having to spend more on CapEx versus OpEx relative to the prior generation of tech startups? How does that play out in the investments you choose?

Josh (40:56):
It's a great question. In the AI world -- and then I'll give you the biology world -- on the AI side, take OpenAI. These are all rumored numbers, nothing's fully confirmed, but [they have] $2 billion, maybe $3 billion of revenue. I think about 10 million people paying 20 bucks a month or thereabouts, and probably a hundred million users -- I don’t know how many of those are unique -- but they're not making money on that.

They're losing several billion dollars today, because you have these upfront costs, big CapEx, a lot of training, and then you try to maybe do some big enterprise deals. A company like Hugging Face is profitable because they're not doing [the training], they're just hosting it and letting people run inference, and then charging and making margin on that kind of stuff. That to me is the interesting question for the people that spent a ton of money: they've got to earn it back.

And can you get pricing power by going from 20 bucks a month to 30 bucks a month? And maybe you get that, because now you have [an] OpenAI premium where you have access to, say, Sora for video generation or something like that.

So that's going to be a big question: are these profitable investments? Not ‘Are they cool?’ Not ‘Are they world-changing?’ Absolutely. But are they profitable investments? And look, the market may not care if they're profitable. The market funds all kinds of unprofitable things when it believes in the narrative, in the story.

But thinking about fundamental businesses and the economic changes between CapEx and OpEx, I think in AI it's very hard if you are building out your data centers, trying to do your own training, your own inference, hosting these models, it's very hard.

Biology, we will see an AWS moment, where instead of having to be a biotech firm that opens your own wet lab, or moves into Alexandria real estate, which specializes in hosting biotech companies in all these different regions proximate to academic research centers, you will be able to just take your experiment and upload it to the cloud, where there are cloud-based robotic labs.

We funded some of these. There's one company called Strateos. There's a ton that are going to come in this wave. And this is exciting, because you can be a scientist on the beach in The Bahamas, pull up your iPad, run an experiment. The robots are performing 90% of the activity: pouring something from one beaker into another, running a centrifuge.

And then the data that comes off of that -- and this is the really cool part -- the robots and the machines will actually say to you ‘Hey, do you want to run this experiment but change these four parameters or these variables?’ And you just click a button, ‘yes,’ as though it's reverse-prompting you, and then you run another experiment.

So the implication here is the boost in productivity for science, for the generation of truth, of new information, of new knowledge. That to me is the most exciting thing. And the companies that capture that -- forget about the societal dividend -- I think are going to make a lot of money.

Tracy (43:29):
This actually reminds me of the conversation that we had regarding snack food innovation and this idea that you can use a sort of Factorio-like simulation just to run new processes through your factory and see how they would actually work out and what the supply chain might look like.

But, not to give in to my five-year bias too much and overly focus on ChatGPT, but where are we in terms of context window expansion? This is something we spoke about with you last year, and I think for a lot of people it's probably one of the overriding annoyances with something like ChatGPT -- the fact that you can't actually copy and paste that much text into it, and that you are limited in terms of the output that it actually gives you. Have there been major advancements since we last spoke to you?

Josh (44:15):
Well, Claude 3 is one of the largest. And then you've got all kinds of interesting collaborations. Nvidia and Microsoft did one with a huge number of tokens. You've got AI21 Labs, that has this thing called Jurassic.

Again, a lot of people are making headway here, but I think we are a year away from you being able to upload hundreds of PDFs, thousands of books if they're not already immediately referenceable, and be able to detect pattern changes amongst documents, summarize, and unearth the entirety of key concepts.

And then I think the most valuable thing will be it prompting you to say ‘Here's a question you didn't ask about all these documents that you just uploaded.’ So yeah, I think we're going to just keep increasing the context window. But that said, most of the history of innovation is: just keep increasing this one factor, and then somebody else comes along and invents something.

It's like that factor doesn't matter anymore. My favorite iconic example of this is sailboats. You find these sailing ships back in the day, they just kept adding more and more sails. These things started to look ridiculous, and then somebody invents the electric motor and you have a motorboat.

So I think we'll have the same sort of thing, and then people will figure out, hey, there's a better architecture here than just constantly increasing the context window. And some of that might be with memory retrieval -- being able to reference other models and just go into the archive of what they have. So yeah, that's going to keep expanding.
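[Editor's note: a toy sketch of the "memory retrieval" alternative alluded to above: rather than stuffing everything into an ever-larger context window, chunk the documents, score each chunk against the question, and send only the top-scoring chunks to the model. Real systems use learned embeddings; plain word overlap is used here only so the example runs with no dependencies, and the documents are invented.]

```python
# Retrieval in miniature: pick the few passages worth putting in a
# small context window instead of growing the window itself.
def chunks(text, size=50):
    words_list = text.split()
    return [' '.join(words_list[i:i + size])
            for i in range(0, len(words_list), size)]

def words(text):
    return {w.strip('?.,!') for w in text.lower().split()}

def score(query, passage):
    q, p = words(query), words(passage)
    return len(q & p) / (len(q) or 1)       # fraction of query words present

def retrieve(query, documents, top_k=3):
    passages = [c for doc in documents for c in chunks(doc)]
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:top_k]

docs = ["robot arms swap tools like a Swiss army knife for each task",
        "context windows keep growing but retrieval may matter more"]
context = "\n".join(retrieve("how do robot arms swap tools?", docs))
print(context)   # paste this, not the whole corpus, into the prompt
```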

Joe (45:48):
Josh Wolfe, [of] Lux Capital, thank you so much for coming back on Odd Lots. Always great to get an update on what you're interested in.

Josh (45:55):
It was great to be with you guys.

Joe (45:58):
Good. You're preparing already to ingratiate yourself and to blend in with the humanoids. Thank you so much. That was fantastic.

First of all, Tracy, I really like talking to Josh and always like getting an update. I really do want the clothes-folding robot though. I actually think that's a really big deal and would make almost everyone's life better if they didn't have to worry about folding clothes.

Tracy (46:37):
I agree, it would be far more useful to have something doing physical tasks like folding laundry versus telling you where your lost wallet is in your house.

Joe (46:46):
That’s what I want. I need that folding robot.

Tracy (46:48):
I mean, I will say, I know everyone likes to make fun of Alexa as well, but in our house we've kitted out all the lights -- they're all those smart bulbs, because we don't have any overhead wiring, so everything has to be lamps. So if you didn't have a robot that was able to turn on all of your appliances at once in a room, it would be incredibly annoying, because you would be going from lamp to lamp to lamp.

So it does make a difference in my daily life at least. I mean, there's so much to pull out from that. So one thing that I thought was interesting from an industrial policy perspective was Josh's discussion of some of the robotic capabilities being developed in places like China, and the idea that we might have another chips, semiconductor-like situation on our hands, where we wake up in 10 years and realize that a primary component of robotics is being built much more efficiently and cheaply outside of the US or the West.

And then the other thing I thought was interesting was the idea of leapfrogging, right? So I think a lot of people, myself included, when we think of technological advances, it's like can this do this thing slightly faster? Can it do it on a slightly larger scale -- to the point about the context window and expansion there?

But you can leapfrog in technology, as Josh was saying, and you can go from the sailboat to the motorboat. Or you could bypass human evolution, for instance, and instead of having humanoid robots, you could have an Edward Scissorhands-like thing with a Swiss Army knife on the end of its arm.

Joe (48:27):
Yeah, that made a ton of sense to me, which is if you're starting from scratch, it's not obvious that the human form that was developed over millions of years through evolution is necessarily the thing you want to create or recreate to do various tasks that you need.

There was a lot in there that I liked. The thing that he was talking about at the end sort of sounded like cloud kitchens, but for biology labs. And so you just have all the robots do it, and then they can prompt you for other ideas. That's interesting.

It does seem exciting, the idea of finding ways to accumulate training data for these robots -- you know, you could maybe solve the mechanical engineering, but there's no equivalent of all of the text on Reddit or Wikipedia or whatever, or Google Books or YouTube. So having to recreate that as a bottleneck for building robots was really interesting.

I love the term he used, I think it was “ignorance arbitrage,” which is a really great term. So it's like, yeah, within a lot of pure science spaces, you're going to get investors who are willing to throw money at someone who just has a really good idea on paper, because that person is smart.

Tracy (49:35):
Well, I think this is also the really unusual thing about this particular cycle, which is the dominance of the incumbents and the fact that on the one hand you do have a bunch of open source software, and to some extent you can take something off of a repository and you can pitch it to investors and say ‘This is the next big thing,’ and they might not have the technological expertise to actually evaluate that.

But when it comes to making actual advancements in something like robotics, it does feel like you have to have an edge in one respect or another. You either have to have the capital to deploy or you have to have access to that data. So I don't know, I guess we'll see how it shakes out.

Joe (50:16):
I guess we'll have Josh back next year.

Tracy (50:18):
Yeah, next year.

Joe (50:19):
Next spring or summer to see what the next big thing is.

Tracy (50:22):
Hopefully he can bring a robot with him of some sort.

Joe (50:24):
A folding robot.

Tracy (50:25):
Yeah. Alright, shall we leave it?

Joe (50:28):
Let's leave it there.


You can follow Josh Wolfe at @wolfejosh.