Watch on YouTube, Apple Podcasts, or Spotify.
Introduction
After writing long-form essays across a weirdly diverse range of areas in the life sciences, I am increasingly confident in my status as someone who knows a little about a lot of things. But every now and then, you meet someone who casually reveals to you an entire subfield that, up until your conversation with them, you’d never even thought about before. This happened to me when I met Sterling a few months back. We met in the elevator as we were both leaving an event, and by the time we’d reached the bottom floor, the conversation had become so interesting that we stood in the lobby for an hour as I pestered him with more and more questions.
Sterling runs a company called Iku Bio. Iku ostensibly does something quite simple: it helps biologics manufacturers figure out what to feed their cells. This is called media optimization, and it is done in an astonishingly old-fashioned way. An engineer runs a handful of experiments in a benchtop bioreactor the size of a Fiji water bottle, waits days for analytical results, and repeats, maybe three or four times before timelines force them to stop searching.
Sterling’s solution was to use printed circuit boards (PCBs)—the same green wafers inside your phone and your microwave—as the substrate for microfluidic bioreactors. Because PCBs are made via lithography, you get complexity for free. Because they’re already mass-manufactured at planetary scale, you inherit sixty years of cost optimization. And because they’re literally designed to carry electrical signals, you can embed sensors directly into the thing rather than cramming them in after the fact.
The result is a device that costs $8 per experimental lane versus $20,000 for the nearest comparable microfluidic system. And there are many, many ways to improve from here on out.
This conversation covers the full stack: what cell culture media actually is and why it’s so much more than sugar water, why biologics manufacturing has more in common with semiconductor fabs than chemistry labs, how Sterling arrived at PCBs, and at the end of the talk, why he thinks a fair bit of lab automation is “philosophically a crime.”
Timestamps
[00:00:48] Introduction
[00:01:26] What is Iku Bio?
[00:05:00] Media optimization as the biggest lever
[00:06:23] What actually is media?
[00:13:07] Fetal bovine serum and the move to synthetic media
[00:15:10] Walk me through a media optimization workflow
[00:18:49] Why biologics manufacturing is closer to semiconductors than chemistry
[00:21:50] Matching the phase three batch and generics
[00:24:12] The 200-dimensional search space
[00:37:02] Printed circuit boards as a medium for microfluidics, and the utility of lithography
[00:40:48] Anatomy of the Iku device
[00:57:09] What sensors are on the device today?
[01:01:36] How do you use the Iku device to perform media optimization?
[01:14:44] Does media optimization survive scale-up?
[01:24:32] $8/lane vs. $20,000/lane: the economic utility of Iku’s device
[01:32:05] Why PCB microfluidics didn’t exist 10 years ago
[01:39:24] Who is the customer?
[01:43:14] What is the ultimate goal of Iku?
[01:49:07] What does the validation evidence need to look like?
[01:52:14] What would you do with $100M equity-free?
[01:57:31] Lab automation is in a strange place right now
Transcript
[00:00:48] Introduction
Abhi: Today my guest is Sterling Hooten. Sterling is the founder of Iku Bio, where he is building a microfluidic bioreactor built on a printed circuit board that cultures, senses, and streams biological data in real time, claiming 10,000x higher experimental throughput at a 100x lower cost. It is one of the most niche areas of wet lab automation that I think I’ve ever discussed on this podcast, and I don’t think I would’ve ever learned about it had I not stumbled across Sterling at an event a few months back where we had a conversation that was so fascinating that I immediately wished we had filmed it. Sterling, welcome to the podcast.
Sterling: Thank you for having me. Very big fan. Really enjoy your articles.
[00:01:26] What is Iku Bio?
Abhi: Thank you. So I’ve given a brief introduction of what you’re working on at Iku, but I’m sure I oversimplified some things. I’d like to hear your own pitch for what you’re doing there and why it’s so valuable.
Sterling: So the largest problems of the 21st century — things in medicine, for climate, for material optimization — all of these are predicated on our ability to manipulate and control living matter. So advancing our understanding of biology is just so fundamental to these problems in the future, and yet the tools that we use right now to interact with biology are primitive. They’re primitive in an absolute sense, and they’re primitive in a relative sense to what we could be doing. At its core, biology is time varying, it’s parallel, and it’s sensitive. And yet the tools that we use right now — that interface destroys at least one of those properties. And in principle, advances in AI also would be an excellent connection with biology. But that interface is fundamentally broken. So lab automation right now is stuck at the Petri dish and the microtiter plate level. It’s equivalent to handwriting manuscripts in the 15th century, sometimes. And so what we’re building is a printing press for biological data. And the way that we’re doing that is we’re rethinking that interface between compute and biology, and we’re replacing traditional microfluidics with a printed circuit board that allows you to embed the fluidics — cells can live inside of it. And that allows you to communicate and control cells in a way that has not been possible before at high throughput. And the largest application that we see for that is in biologics manufacturing. Right now, biologics — it’s a half a trillion dollar industry and it’s supply limited. So every year, Samsung Biologics has to build a new $400 million facility. The reason they’re doing that is because you can only get so much out of a traditional fab plant. They’re closer to silicon fabs actually. And the largest lever that they have is in yield — so how much can you get out of these things, are they producing, and also what are the costs. 
The core of that comes down to literally how many of these dynamic cell culture experiments you can run. And that’s a process called media optimization. And that one problem ends up being connected to this half a trillion dollar industry.
[00:05:00] Media optimization as the biggest lever
Abhi: So to paraphrase, if I wanted to increase biologics manufacturing by an order of magnitude — at least my capacity to produce antibodies and the like — the lever that is most easily pushed on and most likely to give you the most bang for your buck is media optimization.
Sterling: It is the most bang for your buck. You are unlikely to get 10x on that. What you’re looking at is how much can I produce per unit time, and then how consistent is that. And if you can produce more per unit time, you get higher throughput for the entire facility. And then if you have more stability in the product — for biologics and for things that go in our bodies — that’s a desirable outcome.
Abhi: And so my conception of these bioreactors that are producing antibodies is you have a bunch of CHO cells maybe sitting in a very large tank. They’re sitting in a fluid of media and they’re constantly just excreting out these antibodies that are later purified. Iku comes in at the step of deciding what media to actually put into this tank. Is that fair to say?
Sterling: Correct. Yeah.
Abhi: What is — well, like I’ve never worked in a wet lab before.
[00:06:23] What actually is media?
Abhi: My conception of media is that it is sugar water that cells are generally fine with drinking up. I’ve learned that this is incorrect and I’d like to hear your take for what actually is media.
Sterling: I would say that that is a very limited view of what media is — not incorrect in that, if we were talking about media for growing yeast, sugar in water is pretty close to sufficient. But the more powerful way of thinking about media is that it is a very high dimensional control surface for what you can get cells to do, right? Cellular communication comes through things in the media, right? The media actually is the communication channel in a sense between cells. It’s also what carries nutrients into the cells. In mammalian cell culture, it’s closer to serum in blood. So it has many different types of proteins in it. It’ll have different metabolites. It’ll have salts. In defined media it’ll have buffers to keep the pH stable. It basically has a lot of components — and there are hundreds of them really, down to things like magnesium. And each of these are really communicating and interacting with the cells. And they also work across different time periods. So you’ll have growth media, which is when you’re building up the cells, and then there’s media for when you really just want them producing these particular things. And right now, if you buy or produce media internally, it tends to be connected to a particular clone or particular cell line. And so you will optimize the media for that particular cell line, or you’ll optimize media for, say, growing neurons. It’s complicated enough and important enough to the results that you get that exploring it is very valuable.
Abhi: Like I know that there are a few companies that have popped up claiming to technically redesign cell lines to make them better at biologics manufacturing. Does that also demand a change in media?
Sterling: It can demand — the key thing is that the biologics that we are producing now are becoming more complicated, and that is making media optimization more difficult. So you do tend to pair the cell line with a media line, both for repeatability and ease of use, also just for commercial reasons — that’s a better business. But you can — what really happens is you tend to take a standard growth media or something off the shelf, and then you will customize it for this particular thing that you’re trying to make. Because ultimately, productivity is really the interaction of these three or four things: it’s the cell line, it’s the media, it’s the process conditions or the tank that you put it in, and then the actual compound of interest and things that you’re trying to do.
Abhi: You mentioned earlier about like media is both a way — like nutrients for the cell — but is also the substrate upon which they actually communicate with each other. That second part was surprising to me. I did not naturally conceptualize cells in a tank actually talking to each other while they’re churning out antibodies. What are they communicating exactly? Does that question make sense?
Sterling: I think it’s maybe easier to think about it in the sense of our bodies, right? Cells will send out or communicate through different hormones, right? Those will get released. There are small signaling molecules that get broadcast — those are carried through the media. Well, in the body we call it blood serum, right? But in a sense, it’s media.
Abhi: You mentioned also that you have different stages of media that you want to introduce to the cells depending on the cell’s actual life cycle. Is that also true for serum in the human body? Does the body constantly adjust its own serum to whatever the cells need?
Sterling: Yeah. I mean, that is the way that cells differentiate, in a way. You’ve got some gradient that will happen, and then that gradient — that’s basically saying you’ve got different media, and that gradient can tell cells how to orient or can tell cells how to develop. And from stem cells, triggering when — what they’re going to end up being — that’s also basically — it becomes media as you add things into the cell environment there.
Abhi: So why — what’s stopping me from just replicating human serum for mammalian cells? Is that not the best substrate to use?
Sterling: Well, the first question is, where are you gonna get it?
Abhi: Well — I guess this is a more basic question. Do we understand human serum well enough to perfectly replicate it?
Sterling: Replicate it? I don’t know. What I will say — and that gets closer to what you were talking about originally — is that’s what we’ve been doing historically. But instead of using humans, which — not that — very limited supply, or limited willing supply —
[00:13:07] Fetal bovine serum and the move to synthetic media
Sterling: we’ve been using fetal bovine serum, so from calves. There are problems with that. It is highly variable. And for all of biologics manufacturing, the goal is reduce variability. And if one of your largest inputs is variable, that’s a problem. It’s also a challenge because things like — you can’t sterilize it in the traditional way. You can filter it, but you can’t heat it up without destroying — and things like prions, which could be quite bad, you would need to prevent those coming in. So the industry has really moved much towards formulated medias. So you’re building it up from the constituent parts, and that also allows you to — it reduces variation and gives you a lot more control over how you are particularly tuning that media.
Abhi: When you say like at some point fetal bovine serum was being used —
Sterling: Still. It is still in use. It’s mainly in use in research. I think — I’m — maybe there are some biologics manufacturers who are using fetal bovine serum. I don’t know. But I think the industry has pretty much moved to —
Abhi: At this point, would you consider that the synthetic serums that are attempting to recapitulate the biochemical properties of fetal bovine serum — the synthetic stuff is better? Or is it just like it’s easier to get, so you’re okay with not perfectly recapturing fetal bovine serum?
Sterling: I think it’s better.
Abhi: Okay.
Sterling: I think it’s better, and I think it’s better in that you again get to tune it.
Abhi: And so attempting to be more concrete about —
[00:15:10] Walk me through a media optimization workflow
Abhi: what is a media optimization engineer exactly doing? Let’s say I have a plate of CHO cells. I want to produce Keytruda, so pembro. I have a bunch of cells. I have all of them willing to produce the drug. They’ve been genetically edited to do that. What’s the next step?
Sterling: So the process in general is guess and check. So you will take a cell line that you’ve edited or produced for this. Most of the time it’s just — and then you’ll take it out from the freezer. You’re gonna grow it up a little bit. And then you will probably take four or five of those because you don’t kind of know yet, right — which particular strain will do best.
Abhi: So you’re trying with multiple strains.
Sterling: You’re gonna try with multiple strains. And then you will run experiments that allow you to — first you’re gonna run in microtiter plates normally, right. And you’re going to just see where are we, which of these cell lines seems like it fits best with these. After you’ve narrowed it down, you’re going to move to something that has more control. And the reason that you’re gonna move to something that has more control is that what happens in a microtiter plate is extremely disconnected from what happens in any kind of production environment. And the core reason for that has to do with flow. So in a microtiter plate, you get a lot of capillary issues, right? It changes the — you’ve got the surface tension kind of comes up, that changes the gas exchange rates. You get evaporation. And you don’t get any of the different gradients or different little bits of shear forces — all these things that actually affect how cells grow in large reactors. So what you do is you put it into what’s called a benchtop bioreactor. And so this is a little bit bigger than a Fiji bottle in terms of what it’ll contain, and it’s got an impeller in there and it’ll spin it around. So now you’re going to grow those cells in that media for 10 days or something, right? And during that time, you’re going to also change or control the pH level that’s in there. You’re going to control the temperature. You’ll set different impeller rates, seeing what’s optimal. And you’re going to run that for — one person can maybe run 12 of those experiments, 15 of those experiments. It’s pretty laborious right now to actually set those up. It’s gonna run, and during that time, you’re gonna pull off some samples. You’ll take those to the analytics section, depending on how booked up that is — that could be three days to a week sometimes to get all of your answers there. And then you’ll do that.
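To put rough numbers on the cadence Sterling describes, here is a back-of-the-envelope sketch using only the approximate figures from the conversation; they are illustrative, not measured:

```python
# Back-of-the-envelope throughput of the traditional benchtop workflow,
# using rough figures from the conversation (illustrative, not measured).

reactors_per_engineer = 15   # "one person can maybe run 12... 15 of those"
run_days = 10                # "grow those cells... for 10 days or something"
analytics_days = 7           # "three days to a week sometimes"
rounds = 4                   # "maybe you will get three or four runs"

conditions_explored = reactors_per_engineer * rounds
campaign_days = rounds * (run_days + analytics_days)

print(f"{conditions_explored} media conditions in ~{campaign_days} days")
# With these numbers: 60 conditions in roughly 68 days.
```

Sixty conditions in two-plus months is the ceiling this workflow imposes, which is the context for the search-space discussion that follows.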
[00:18:49] Why biologics manufacturing is closer to semiconductors than chemistry
Abhi: I’m sorry, what questions are you asking at that point? What are the samples meant to answer?
Sterling: So ultimately, your sample is meant to answer how much total biologic did we produce in here, at what quality, right? And then the other question there is how overall — how consistent is it? Will it be — that’s actually a large sort of hidden cost, as I said. The best way to think about biologics manufacturing is to think about it as high precision manufacturing, closer to semiconductor manufacturing. That’s really the reason why Samsung Biologics is in the position that they are — because they took what they learned in terms of process control and brought that over. The reason that Fujifilm is a large manufacturer is because they took chemical process engineering and brought it over. Now, these were not biological companies, right? They are industrial manufacturing companies. And when you think about reducing process variability, one way of looking at that is how precise is the part that comes out. But then what makes up that, right, is like how much variation can we absorb without it affecting the end product? And so if you can come up with media and process conditions that are more forgiving, you’re relaxing it a bit, right? You can still end up with something that’s very precise at the end, but oh, we didn’t actually need as much — we were more forgiving over here. And that can be important because if you lose a batch of biologics, it’s very expensive. And that can happen. And it does happen. And so the way to reduce that is through media optimization. And so to finish on this — you’ve run that set of experiments, you’ve got your readout there. And those readouts, although those are the most important, you’re also going to characterize kind of everything in there that you can, because you want to see how those are affecting that actual result. Then you will repeat this. And depending on how much time you have, maybe you will get three or four runs at that, and then that’s it. And that comes down for biologics manufacturing to the regulatory reasons.
[00:21:50] Matching the phase three batch and generics
Abhi: So how much of — would you say the optimal cell lines and the optimal media — it’s like there is a threshold of quality you want to meet and after that you’re done, versus you are trying to make this as perfect as possible? Is it kind of dependent on what drug you’re trying to produce?
Sterling: I think the goal is match what was in the phase three trials. So in the process of taking a drug to market, during your phase three trials, the batch that you produced there — that is what all of the FDA’s evaluation was based on. So they want to keep that the same. So anything that deviates from that is undesirable.
Abhi: Is this true even when the drug goes off patent and the generics manufacturers — are they trying to make it even — they’re trying to improve the process even more, or even for them, they’re trying to replicate exactly what went on with the original company?
Sterling: That is a great question. I should look into that because — no, truly, because they do have to go through — so they have a couple options. The first thing is that they will basically just license the cell line and the media from the existing pharma company, right? Pay them for that. And then that way the pharma company can still get some revenue from that. The alternative is they need to come up with their own cell line and — I think the regulations are such that there’s a way of — I think it’s like if you can prove that it’s similar enough, then it just counts as a process change.
[00:24:12] The 200-dimensional search space
Abhi: And getting back to the question of actual media optimization — the media optimization person goes to the analytical chemist. The chemist tells you all you need to know about the samples that you’ve been given. You repeat this five to six times. What are the levers of change that you have over the media?
Sterling: So media is best thought of as this control surface for affecting what the cells are doing. What are the levers in there? You can change the components, and then you can change the concentration of those components, and then you can change timing of those things. And if you start with 200 or more — let’s start with 200 components that you could put in there, and then the different concentrations that they come in, and then the timing — that already is quite a large space to explore. Then you have that interacting with the cell and the different cell lines — larger space. And then with that fixed compound that you’re looking for. So the standard things that people are going to change or tune, right, is when is a carbon source coming in, and when — as you start producing different proteins, the needs of the cell change. So if you shift into a different mode for the cell — you can signal it to shift into a different mode, starts producing these other — all of a sudden its needs change.
Abhi: Mm-hmm.
Sterling: And being able to anticipate, buffer, and meet those needs — that then has a lot to do with the output.
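The scale of the search space Sterling is gesturing at can be sketched with toy numbers. The concentration-level and timing counts below are assumptions for illustration, not from any real screen:

```python
import math

# Toy sizing of the media design space: ~200 candidate components,
# each at a handful of concentration levels, with a few feed-timing
# options. The level and timing counts are assumptions for illustration.

components = 200
concentration_levels = 5   # assumed, e.g. 0x, 0.5x, 1x, 2x, 4x
timing_options = 3         # assumed, e.g. at seed, mid-run, late-run

# Full factorial: every component independently set.
full_factorial = (concentration_levels * timing_options) ** components
print(f"~10^{int(math.log10(full_factorial))} combinations")  # ~10^235

# Even restricting to 10 dominant components leaves a huge space:
reduced = (concentration_levels * timing_options) ** 10
print(f"{reduced:,} combinations for just 10 components")  # 576,650,390,625
```

Against a budget of a few dozen benchtop runs per campaign, even the drastically reduced space is unsearchable by brute force.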
Abhi: How much of the optimization — like even the direction or specifics of the optimization — can be theoretically known and applied versus just always empirically determined? I guess the more specific question I’m asking is, does a media optimization engineer — are they coming to every new problem almost like tabula rasa? Whatever experience they had in the past does not apply to this new cell line with this new drug.
Sterling: So the question of how tractable a problem this is and what’s the current state of the art — the current state of the art is that best practices live in the minds of the practitioners. And a lot of that comes down to familiarity with that cell line, familiarity with the media they already have. And most manufacturers are working in a particular kind of domain or specialty, right? And so as you’re constraining that search space, it does make it easier to operate in there. However, it is not the case that you will one-shot it coming through. And then the second thing is, it’s actually reasonably easy to get caught in a local maximum. And if the cost of running those experiments is high, or the experiments themselves are sort of precious, you’re really not going to push very far out. The lever they currently use is mainly in strain engineering. And so they’ll try to select strains that’ll have the highest performance. But once the cells that you’re using are set, it does all come down to the media for optimization. In a model sense, it does seem that it’s tractable. It does seem like there’s transfer learning. How broad that is really comes down to what experiments we’ve been able to feed into these models so far. And the answer is not very many. The largest facility that I know of for running these sorts of dynamic cell culture experiments — they can run like 300.
Abhi: In parallel at any given time?
Sterling: Yeah. 300. And that’s like, the entire company is just doing that. So that’s the state of the art. And a lot of that comes back to the fact that it’s so manual.
Abhi: So the one last question I have before we move on to how Iku is fixing this — I can understand being able to easily modify concentration of the media. I understand being able to modify the timing of when you’re giving which media to the cell line. The components, the constituent components, feels a lot more complicated. Because that’s like 200 components. How much of that is like — in practice there’s 10 of them you modify at any given time, and the other 190 are pretty standard and all cell lines will need this.
Sterling: Yeah. So how much is like — what’s the core? Is there some —
Abhi: Dimensionality reduction?
Sterling: Yeah, like is there an 80/20 thing going on? Oh yeah, absolutely. Absolutely. Which, as I said, the glucose — your sugar source or carbon source, energy, the pH that you’re running at — those are, yeah, there probably are 10 that are dominating. But that’s why it’s actually so challenging — because there are 10 that are dominating, but because the system that we’re controlling is quite non-linear, it can amplify what are sometimes in certain conditions some small change. And my favorite example of this is that — this was in industrial manufacturing — but changing the amount, just changing the amount of magnesium at a particular point doubled the output. And it didn’t necessarily need — there was no a priori way of knowing that it would’ve been magnesium that went in there. And you can say, oh, okay, sure, that’s a lever and we should do that on each of these. But the problem is that potential exists for all of those other 190 things, right? So it’s like, sure, there are these core things that tend to dominate —
Abhi: But those 10 things could vary based on what the problem actually is.
Sterling: Yeah. Well, those core things of like — you do need to, the salts that are in there, right, and when energy comes into the system — those are definitely floor level. You have to figure those out. But then — and if you get those wrong, basically those are controlling the — where the floor is. So if you get those wrong, it kind of doesn’t matter what you do in these other areas. You’re not going to have high performance. But just because you get those right doesn’t mean that you have high performance at all. They’re just table stakes. You need to get those done.
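One way to picture the "floor plus nonlinear levers" structure Sterling describes is a toy response model: floor components (salts, carbon source, pH) gate whether anything works at all, and a trace component acts through a steep nonlinearity on top. The function below is invented for illustration and is not fit to any data:

```python
import math

# A toy, purely illustrative response model echoing Sterling's point:
# "floor" components (salts, carbon source, pH) gate whether anything
# works at all, while a trace component can act nonlinearly on top.
# None of these numbers come from real data.

def titer(floor_ok: bool, trace_conc: float) -> float:
    """Relative product titer for a hypothetical media recipe."""
    if not floor_ok:
        # Get the table-stakes components wrong and nothing else matters.
        return 0.0
    # Steep sigmoidal response to a trace component (think magnesium):
    # a small concentration change near the threshold ~doubles output.
    return 1.0 + 1.0 / (1.0 + math.exp(-40 * (trace_conc - 0.5)))

print(titer(False, 0.5))   # 0.0: floor not met
print(titer(True, 0.40))   # ~1.02: just below the threshold
print(titer(True, 0.60))   # ~1.98: small change, nearly double
```

The point of the caricature: getting the floor right buys you nothing by itself, and the payoff from any one of the other ~190 components can hide behind a threshold you cannot predict a priori.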
Abhi: That makes sense. And so we mentioned this engineer who’s trying to produce Keytruda.
Sterling: Sure.
Abhi: They’re evidently working, at the very beginning, in a Fiji-bottle-sized bioreactor.
Sterling: Yep.
Abhi: Doing these rounds of iteration, trying to get to something good. What is Iku’s proposal for a better way to do it?
Sterling: Our proposal is to rethink what it is that you’re trying to do when you run that experiment. So that Fiji bottle device gets used for two purposes, one of which is you want to grow cells and you want to grow them to feed a seed train. So you’re growing them, or you need that quantity of those cells. That’s one. And the second is that you need information and you need to be able to control the environment that the cells are in over time in order to get it. And so for this first set of things where you’re trying to grow a lot of cells or grow them up — great, perfect use for it. If you’re trying to extract the most amount of information and trying to control the cells, it’s a very limited way of doing it. Before starting on any of this, I’d actually seen some of these benchtop reactors and I asked them — if the thesis is that it gets better when you go smaller, why did you stop at the Fiji bottle? And the answer was, well, if we go any smaller, our sensors won’t fit. And that’s because they’re using off-the-shelf sensors. And if you ever see a photo of these things, it’s a hodgepodge of different things that have been kind of crammed in there. And that literally is — doing sensor design is its own field. And you need to design not just one type of sensor. You need to design many different types of sensors. And there’s also not that much of a benefit going from a Fiji bottle to half a Fiji bottle in size because of the manual labor and all these things. So our solution is to think about what’s actually the best platform for building sensors, and then can you put cells inside of it? And my last company was a robotics company. Any of the humanoids now that you see going on — I’m highly skeptical of the economics on these things — but any of the humanoids that you see, the core technology that enables them to move and interact with the environment — that was what we built. And that is a sensor problem. And it’s a sensor in a high-noise environment. 
And that is abstractly quite close to what we’re doing in biology, right? So the idea is, if you have a good place for building and placing sensors of different types around, now you’ve reduced the problem. And so, easy place to build sensors — now you just have to figure out how to grow cells inside of it and keep them alive. And if you pick a mass-manufacturable technique for doing that, it also solves some of the scaling problems. Because the challenge with controllable systems right now is that they still literally require somebody to come over, unhook everything, set it up. You can use disposables to take that down a bit. But it also takes — when you go larger, it takes more media. It’s more expensive to run it. It’s less repeatable. None of it makes sense except that it’s a difficult engineering problem.
Abhi: In a practical sense — I can buy that this form factor was chosen purely because our sensors aren’t small enough to fit in something smaller. What is the form factor that you guys have?
[00:37:02] Printed circuit boards as a medium for microfluidics, and the utility of lithography
Sterling: So the core differentiator is that we are reusing printed circuit boards, which are ubiquitous. They are in your phone, in your microwave. And we put microfluidic channels inside of them. And by doing that, it allows you to then have cells live inside. They can pass through, they can live inside there. And it turns out that making microfluidics previously that integrate those types of sensors is extremely awkward. And so you either don’t do it, or if you do do it, it’s still hand-finished. And so the big differentiator is everything comes straight from the fabricator ready to go. And this is a theme that has happened before. So in silicon photonics, which is where you take existing silicon fabs and you say, hey, can we use this in a new way? And not just to do integrated circuits, but can we now do things with light in it? Or in your iPhone, it has a light detector. That was a new way of using that. And the core there is that the process that’s used is called lithography, which is where you’ll take a mask, kind of like a snowflake, you project light down through that or something, and that causes certain things to react and certain things not. And lithography is a really powerful manufacturing technique because you get complexity for free. What that means is, normally if you’re doing traditional subtractive manufacturing, as your part gets more complex — you’ve got more nooks and crannies in here — it takes more time to make it, or you’ve got more tool changes, all these things. But with lithography, you pay that cost once. You pay that cost when you make your snowflake. But it actually doesn’t matter how complicated you make the snowflake for what’s down here. And so it pushes you to say, what’s the most complicated thing we can make here that has the most value? Because it literally costs the same. It doesn’t matter if it’s one line through here or some complicated maze. So that’s what semiconductors are doing. Then they apply that to photonics, right? 
LIDAR, for example. Printed circuit boards are made the same way: it’s lithography. And if you can leverage that in more complicated ways, you start both enabling capabilities that weren’t possible before and riding a cost curve that’s really beneficial. So the idea is, every time that we as a society have found a new use for lithography, large industries get built off of it.
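The "complexity for free" economics can be caricatured in a few lines: subtractive machining pays per feature on every part, while lithography pays once for the mask and then a flat cost per part. All dollar figures below are made up for illustration:

```python
# A caricature of the cost structures Sterling contrasts. With
# subtractive machining you pay per feature on every part; with
# lithography you pay once for the mask and a flat cost per part.
# All dollar figures below are made up for illustration.

def subtractive_cost(n_parts: int, n_features: int,
                     cost_per_feature: float = 2.0) -> float:
    # Every extra nook and cranny costs machine time on every part.
    return n_parts * n_features * cost_per_feature

def litho_cost(n_parts: int, n_features: int,
               mask_cost: float = 5000.0,
               cost_per_part: float = 1.0) -> float:
    # n_features never appears: complexity is free once the mask exists.
    return mask_cost + n_parts * cost_per_part

for features in (10, 1000):
    print(features,
          subtractive_cost(10_000, features),   # scales 100x with features
          litho_cost(10_000, features))         # unchanged: 15000.0
```

Because the lithographic cost is flat in pattern complexity, the incentive flips from "keep the part simple" to "pack in the most valuable complexity you can."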
Abhi: And sorry, so where’s the lithography component coming in when you’re talking about building a new bioreactor?
Sterling: So the way that we make our chips — which you have, right?
Abhi: Yeah. Let’s — do we? Oh man. Here it comes out pretty small.
Sterling: Yeah.
[00:40:48] Anatomy of the Iku device
Abhi: I am seeing that there’s a bunch of circuits coming on from here. Walk me through the anatomy of this device.
Sterling: Sure. So the first thing is that it looks cohesive, but it’s actually six layers. And each layer is either carrying electrical signals or routing fluids. For this particular chip, it has a channel that’s a millimeter wide and about a hundred microns — about the size of a human hair — tall. And that’s actually a great size for cells. You can flow media and cells into it, and it has all of the components that a benchtop bioreactor or a more controllable system would have. The way you make these is through lithography. So these lines and all of the features that are on here — there’s a snowflake kind of pattern made for them. And then they will put on what’s called a resist and do an etch, so it will keep those lines where you want them and etch away everything else. Then you make the next layer, and the next layer, and you compress all of those together. The way to think about it is that it’s like a 2.5D space: you’ve got a two-dimensional plane, but you’ve got multiple two-dimensional planes. Topologically, that allows you to do things like take a spiral, get to the middle, and then get out of it — you can come up and out and around. It also allows you to put electrodes or different sensors in different places in relation to the fluid and in relation to the cells. And that’s kind of abstract, but let me give you a very concrete example: if you want a readout of the electrical signals of heart cells, cardiomyocytes, you want to read across those cells. Well, normally you need to be able to put electrodes above and below them, right? Or you can put them side to side. That’s a primitive — it sounds very simple, and yet I will tell you that with other techniques it is a difficult thing to do.
And so by switching to this new substrate, a whole class of problems that are traditionally quite difficult become substantially easier.
Abhi: And sorry, I don’t have a great conception of where do the cells — on this green thing, are those holes where you put the cells?
Sterling: It is, it is. And I actually have a drawing I should send to you. You can put up a drawing on this screen.
Abhi: Yeah.
Sterling: Because that is also part of the problem — from the outside it literally looks the same as any printed circuit board. Second thing is, in biotech, a printed circuit board looks like alien technology. But yeah, it has actually small holes. There are ways of getting fluids into the actual device. And then you can run them past sensors, or you can — it’s often easier to run the fluid past the cells. And then you’re kind of reading things out on the fluid.
Abhi: And so there’s not a specific chamber here where the cells sit. They’re literally in a line formation as you run fluids through them.
Sterling: In this particular chip — this particular chip is like a year old. In newer designs, you have more like a chamber; you’re seeding that chamber and then your cells are growing over it. But the powerful thing about using this technique for making microfluidics is that you can make a large number of variations. That’s a difficult problem in traditional microfluidics, because you would need to make new molds, and a new mold is $25,000, $40,000 — you need to get a mold maker to come in and machine it. The economics mean you need to make a lot of them. With printed circuit boards, it’s easy to just make variations. So we have a core catalog that we’re building — these are the designs for particular applications — but with every new print run, it’s relatively easy to change it for whatever the condition is.
Abhi: Sorry, is it fair to say that typically microfluidics are not built using lithography, but you are building them with lithography?
Sterling: Microfluidics historically started with lithography. They were built using similar techniques used for semiconductors. And in most research labs, when people build microfluidics, that’s still the way it’s done.
Abhi: Okay.
Sterling: What you’ll do is you will make a silicon mold and then cast a polymer over it. This polymer is called PDMS. The desirable properties are that it’s optically — not fully transparent, but you can at least see into it — and it’s gas permeable. That allows you to have exchange of gases; you can put it in an incubator and use it there. The downside is you can also get evaporation. The problem is you end up with a fragile output, and it’s also fairly labor intensive. But people like it because you can do it in your own lab. The difference comes down to the use of lithography for the sensors and fluidic channels together in one thing. And critically, in silicon fabs, you need to be really careful about contaminants. If you need, for example, a gold-plated electrode, you cannot do that in a silicon fab, because you will contaminate it — it’s not allowed at all. Very bad. So with the printed circuit board as a medium, you can integrate many more types of sensor modalities than are possible with silicon. And the second thing is that the reason to use silicon is that you want extremely fine features and detail; once you need something on the nanometer scale, it’s kind of the only option. But our thesis is that cells themselves are more on the five-micron scale, which is a few orders of magnitude difference.
Abhi: Yeah.
Sterling: And that’s actually the domain where printed circuit boards are a better place.
Abhi: Is there — so if historically people do use lithography for microfluidics, but they only use it for the channels and not the actual electronics — what innovation allowed you to actually include electronics in the design of the microfluidic?
Sterling: Yeah, so let me restate that. Microfluidics is a really broad term. For example, DNA sequencing — Illumina, right? That’s using silicon for a microfluidic system, and for doing the sensors. It’s a really useful place for that, but it has limitations in terms of where in space you can place things. The example I gave earlier, trying to read across these cardiomyocytes — you can’t do that with silicon. There’s no way to build a channel of the size you need for the cells to go through, but buried, with electrodes above it — you just can’t make it that way. So the core innovation is, first of all, just conceptually thinking about printed circuit boards as a medium for making microfluidics. I’d been working with circuit boards for 10 years or something; it never occurred to me to put fluidics into them. I’ve been talking to people about this for three years and never met anybody who was like, oh yeah, I’ve seen that before.
Abhi: So as of today, there’s no one combining circuit boards with microfluidics?
Sterling: Not for — there is for diagnostics.
Abhi: Oh, okay.
Sterling: Yeah. So Professor Moschou at the University of Bath — she’s really the pioneer of putting fluids into the circuit board straight from the fabricator. And the reason that’s so important — why I keep coming back to it — is that you can do a lot of things by hand, and academics are prone to this, that do not scale if you need to make hundreds of thousands or a million of something, right? If you’re doing that, you need to pick something that is mass-manufacturable. In terms of cost and complexity, the cheapest things to mass-manufacture for microfluidics are either paper or molded parts, when you build a lot of them. If you try to make microfluidics in a PCB in a lab, you can do all kinds of weird things. Getting it compatible with the standard fabrication process is a different ask — both because fabricators aren’t terribly keen on changing their processes, for the most part, and because when you do it by hand, you’re introducing variability from the beginning. When you have it done in a fabricator, you’re inheriting the hundreds of billions of dollars that have been spent cumulatively on printed circuit board development. It’s been around for 60 years; entire industries are built upon it being already very good. So let’s just reuse the thing that’s already quite good and low-variability.
Abhi: Could you give me some intuition for how the device is actually put together? So my mental conception of lithography is you’re able to create these very fine channels in the silicon via shining light through a mask. What’s the next step after that? Maybe you do this on multiple layers to have this multi-layered system of channeled —
Sterling: Yeah. So for traditional silicon fabrication, it really is a mask, then you etch, then a mask and you etch, mask and etch. With printed circuit boards, each layer can be made out of different materials — this is where there’s an enormous amount of flexibility; it’s a much richer palette to start building out of. The foundation is what’s called FR-4, which is a fiberglass structure — that’s why boards are normally green. On top of it, it’ll come coated with a layer of copper on top and a layer of copper on the bottom. That is the simplest circuit board you can buy: the cheapest one is just that, and it’s just been etched. Then they put down a solder mask, basically a protective layer, so that you don’t just scratch off the copper. And then you silkscreen it, if you want to put labeling and all these things on. At its core, that’s the process. When you add in microfluidics, there are techniques for making the fluidic channels on one layer. Then, as you need, you can just stack on another layer, and that layer has fluidics, or in between them you can now route your heaters, right? You need to put your heaters there. Or if you want to put in the electrodes, or whatever your end sensor is, you’ll pattern that on that layer and then you sort of build it up and stack them together. You close it and then —
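The build-up Sterling walks through can be written down as a toy top-to-bottom stack. Every layer name, its ordering, and the dimensions here are illustrative assumptions for a six-layer fluidic PCB, not Iku’s actual design files:

```python
# A toy top-to-bottom stack-up for a fluidic PCB, loosely following the
# build-up described above. Layer names, order, and dimensions are
# illustrative assumptions, not Iku's actual design.
STACKUP = [
    ("silkscreen",  "labeling"),
    ("solder mask", "protective layer so the copper isn't scratched off"),
    ("copper",      "electrodes / sensor pattern for this layer"),
    ("FR-4",        "fiberglass core (the green part)"),
    ("fluidic",     "etched channel, ~1 mm wide x ~100 um tall"),
    ("copper",      "heater routing between layers"),
    ("FR-4",        "second fiberglass core closing the channel"),
]

def layers_of(kind):
    """List the roles played by every layer of a given material."""
    return [role for name, role in STACKUP if name == kind]
```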
Abhi: So in V2 of this device, you have this chamber where the cells live. You have microfluidics connecting this internal chamber — maybe it’s external — to a bunch of pipes that feed in some particular axis of variation that you want to control during the media optimization process. And you also have embedded or maybe external sensors that are connected to the circuit board to have some sort of readout of what’s going on in this chamber where the cells live as the media is being applied. And what’s the output? What do you actually — what is the output of the system? I imagine one is maybe temperature, maybe another is internal humidity. What other axes are there that you can actually get straight off the sensor and straight off the device?
[00:57:09] What sensors are on the device today?
Sterling: So the way to think about it is that if you’re going to do any kind of cell culture, there are a set of table stakes that you need to be able to do in there. And those are temperature, pH, dissolved oxygen — we’re flowing things through, so you need to be able to measure flow rate. And those together — that’s the core set of things that our system is currently reading from. The next layer are the electrochemical sensors. So being able to read impedance is actually very useful. If you can read impedance for the media itself, you can detect some changes in how the media is adapting. And if you place them in relation to the cells, you can also correlate cell growth with impedance, which is based on how these charges sort of end up hitting against cell walls at different frequencies. So that’s a core thing there. You can do conductivity through it, which is partially used for offsetting where the impedance reading is coming from, because it can get interfered with in a lot of ways. And so you sort of need a reference point in order to do that. And then you can do other electrochemical techniques, like cyclic voltammetry. But the readouts right now are the impedance, flow, dissolved oxygen, pH, and temperature.
Abhi: Theoretically, I imagine all of these sensors already had miniaturized versions of them available. Is that true? Not true?
Sterling: Not the case. Not the case. Nothing that our system can do at the moment is anything you couldn’t have done by hand or with a very custom setup. The challenge is: how do you do more than two or three of those at a time? How do you build them economically? For example, the chip that I showed you, in any kind of reasonable quantities, is like $3 or $4 — and you can actually get it down to less than a dollar. So if you’re buying sensors off the shelf, the economics are going to start killing you very quickly. The second thing is that it’s a challenge to integrate those things. A big idea in robotics, or in engineering any kind of real system, is that interfaces and connectors are what will kill you — they’re very common points of failure. So the best solution is no connectors. When you build the sensors all in the same platform, you essentially get to do it with no connectors. So that’s the trade-off: harder, more difficult engineering from the outset, but lower variability and better economics.
Abhi: I imagine you get dissolved oxygen, pH, and a few of these other parameters. I imagine there’s still some you’re missing in the sense of — is the protein that I’m expecting to produce actually being produced?
Sterling: Yeah.
[01:01:36] How do you use the Iku device to perform media optimization?
Abhi: So it sounds like you’re able to optimize to a threshold, and after that you need the analytical chemist to come back in and do their thing.
Sterling: So our goal is to make the analytical chemist kind of a confirmation rather than be limited by them. And the reason comes down to lessons from control theory. The first is that for any system you’re trying to control — in this case, cells — if it moves at a certain rate or speed and you want to be able to dampen or amplify that, you need to be able to read it fast enough that you can come in and make an intervention. Anytime you take a sample of something and do an offline measurement, that loop is normally too long. Sometimes that loop is five minutes or two minutes — okay, maybe you can work with that. If you need to take something to your analytical chemist, it’s probably hours or days. That information is not useful to you in the actual control of the culture, right? So what you want are real-time sensors, sensors that are truly integrated into the thing. The sensors that we’re using now really are just the table stakes that let us start building in these other sensors. If you don’t have those core sensors, you can’t even keep the cells alive; there’s just no point. But being able to have live readouts of monoclonal antibodies — that is what we’re building towards in the device. It’s being able to have the optical sensors built in. It’s being able to leverage the biological or chemical biology techniques that we have right now for getting signals out of cells. All of those are compatible with our system. And that’s where I think the real value starts being unlocked, because there’s a large difference, sort of philosophically, between just reducing the cost of something versus what questions become askable now. And the questions that become askable and the experiments that you could run — that’s what I think is so powerful about using this substrate as a technique. You make this core thing — can you grow cells in high throughput in this dynamic way? Okay.
Once you have that, every new sensor system you put in gives you more lenses into it. And this comes back to why lithography is so powerful — normally you have to make a trade-off, right? Every sensor I put in costs me money, and so I’m only going to put in the sensors that I need. But if it doesn’t cost us any more, or if it’s basically trivial, then the idea is actually: let’s just instrument it. Let’s just keep instrumenting it. Classically you would say, well, I don’t really care about those features; those things don’t matter. But what we’re moving towards is having fewer priors and less human interpretation on the streams of data that are coming in. So for example, impedance sensing does not give you a simple number. It gives you a complex number — and not just one, but a complex number across hundreds of frequencies. So you’re getting back this large readout, and it’s changing over time. If you and I try to decode that, it can be difficult, right? And we can argue about this, but machine learning is getting pretty good — arguably quite good — at handling those types of things. And so the way that I separate these two: there are what are called narrow-band sensors, and there are broadband sensors. A narrow-band sensor is, for example, a readout of temperature. You’re gonna resolve that to some resistance value or some Celsius basis, and you want it to respond to temperature and pretty much nothing else, right? A very easy thing to interpret. Same with your lactate — you want something that only responds to the lactate in the media, nothing else. These are narrow-band sensors; they’re meant to reject everything else. And then there are what I’m gonna call the wider-band sensors. If you take a microscope and put it on something, that’s fairly broadband, right?
There’s a lot of stuff going on in there. There’s not just one answer about what’s going on, and you can sort of select — I think these things are more relevant to the questions I’m asking, or not. And for things like the optical readouts, the impedance, some of these other electrochemical techniques, the magnetic fields that are in there — when you have machine learning on the other end to interpret that, it would be surprising to me if that’s not useful.
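The broadband impedance readout Sterling describes, a complex number at each of many frequencies, can be collapsed into a few trackable summary features before any model sees it. This sketch uses only the standard library; the particular features chosen are illustrative, not Iku’s pipeline:

```python
import cmath
import math

def spectrum_features(spectrum):
    """Collapse a broadband impedance spectrum {freq_hz: Z (complex)} into
    a few narrow summary features a model, or a human, could track over
    time. The feature choices here are illustrative assumptions."""
    freqs = sorted(spectrum)
    mags = [abs(spectrum[f]) for f in freqs]          # |Z| per frequency
    phases = [cmath.phase(spectrum[f]) for f in freqs]
    return {
        "mag_low": mags[0],                           # |Z| at lowest freq
        "mag_high": mags[-1],                         # |Z| at highest freq
        "mag_slope": (mags[-1] - mags[0]) / (freqs[-1] - freqs[0]),
        "mean_phase_deg": math.degrees(sum(phases) / len(phases)),
    }
```

A cell layer growing over the electrodes would typically show up as these features drifting over successive readings, which is the kind of time series a learned model can consume directly.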
Abhi: This is maybe a naive question, but at the end of the day, all the signal you’re able to extract from this device is gonna be some electrical property of the tiny little bioreactor you have in there. Is that correct?
Sterling: No, the big picture is that we’re integrating all of these different modalities. So we are integrating the optical modality. My dream here is to get Raman sensing into — multiplexing Raman sensing across this, right? Having that method of looking at it. It’s having those with the lactate and the glucose and the monoclonal antibody readout, right? Or whatever those domains are — in an instrument sense, that’s extremely powerful. So that’s the goal.
Abhi: Okay. Interesting. I imagine some of these variables — you mentioned — are immediately interpretable. There’s a good value you should be reaching. I imagine dissolved oxygen is one of those. For the more complicated ones where you don’t know whether this is a good value or a bad value — like glucose or some other mineral — where does the ground truth come in? Is that where the analytical chemist comes in and they give one singular data point, like what’s good? And then the purpose of the system is to correlate everything that you put into the system and all these output variables you got out to that ground truth? Or something else?
Sterling: So I think a useful lens for this is from a book called How to Measure Anything. Highly recommend — this book changed my life. And the idea is the expected value of perfect information: any reduction in uncertainty has some value, and any measurement has some cost, so there’s an economic trade-off. Knowing the temperature of this room — there’s not much value to us, right? It doesn’t matter whether we’re off by five degrees or 0.1 degrees. For semiconductor manufacturing, it matters quite a lot; you need really, really tight values there. So if you take that lens — certainly overall, there’s a need for precision on the readouts of how much antibody we get out of this, and the quality of that, right? But for earlier parts of the process, do you need that level of precision?
Abhi: Well, I guess at the end of the day, I imagine the whole purpose of the process is to get to antibody production. But I guess, is part of what you’re saying that there are earlier intermediate benchmarks you want to hit before you get to the antibody?
Sterling: What I’m saying is that your ultimate readouts are yield, titer, quality, and stability — those are the things you care about, and pretty much in that order. Even on the yield, though, there’s still variation inherent in cells, right? Every batch you run, even though they’re trying to reduce variability, you’re still going to get some variation. So if you take a sample and learn the titer or the yield that came out to two decimal places — okay, great. But your process variability is 1% or 2% anyway, so knowing it to more decimal places doesn’t really help you. And then the second part is: if every measurement has a cost in some sense, can you change your measurement system such that you get the information you need in a more economical way? Part of the way of doing that is by loosening constraints when possible. Ultimately, certainly, you’re still gonna run it on your benchtop and your pilot systems, and you are going to characterize it there, because you do need ground truth from those. But in terms of finding the right media or conditions — do you need two decimal points of accuracy on that? Do you need all of those readouts to do it? No.
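The point about measurement precision versus process variability is easy to make concrete: independent noise sources add in quadrature, so once assay noise is well below batch-to-batch variability, tightening the assay barely changes what you observe. A small sketch, with illustrative numbers:

```python
import math

def observed_cv(process_cv, measurement_cv):
    """Total coefficient of variation seen in a readout: process
    variability and (independent) measurement noise add in quadrature."""
    return math.hypot(process_cv, measurement_cv)

# With ~2% batch-to-batch process variability dominating, tightening the
# assay from 1% to 0.01% noise barely moves the observed spread.
coarse = observed_cv(0.02, 0.01)      # ~0.0224
fine = observed_cv(0.02, 0.0001)      # ~0.0200
```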
Abhi: Is a good way of thinking about this — you start with the Iku device at the very beginning, and then once you’re happy with what you see, then you move on to the benchtop device? Allowing you to narrow your search space down to a very small number of parameters.
Sterling: Right. It would basically be like — you’re still going to end up — the process looks pretty much the same. The difference is what is the quality and speed that you came to that answer. What’s the quality of the answer you came to? What’s the speed that you came to it? And then the second part is, how many of those benchtop experiments did you need to run? Because there’s a difference between running them in an exploratory sense versus running them in a validation sense. In a validation sense, you’re just trying to make sure that things are repeatable. So you need to run, let’s say, three to five copies of it or something. But if you’re already quite confident that you’re at the optimal point, it doesn’t make sense to do the exploratory experimentation there anymore.
[01:14:44] Does media optimization survive scale-up?
Abhi: Moving on to — okay, you’ve done the Iku optimization and now it’s time to move on to the bigger things. How worried are you that moving the cells to a physically larger space forces the media optimization to move into a completely different direction?
Sterling: It’s definitely possible, and every time you change physical shape and geometry, you do get some variation. The confidence comes from understanding that, first of all, empirically, every microfluidic system that has flow integrated into it ends up correlating quite well with the larger system. The reason people have hesitation about it is that they’re thinking of microfluidics without flow, and the recirculation effects. And that’s actually the key thing, right? It’s a question of: do you have flow in this thing or not? And how do that flow, those shear forces, the oxygen transfer rates, and the gradients you create represent what’s going on at scale? So that’s one part of it. But let’s say you don’t buy any of that. The easier way to see it is that the problem decomposes into two broad parts. There are parameters that change with scale — things like your hydrostatic pressure definitely change with scale, right? You’re not getting away from that. Certain mixing times change; you can get pockets in very large reactors. But then there is a set of parameters that empirically seem to be scale-invariant, and for the most part, media optimization seems to be scale-invariant.
Abhi: Do you imagine in the ideal setting that this is a closed-loop system that just continuously tries different media optimization parameters, feeds it all into a model, it plans the next round of media optimization, and that just goes in a loop?
Sterling: Yeah. So aside from running the experiments, how do you actually interpret and decide with it? Clearly the entire zeitgeist right now is about replacing the control layer with AI and models. Whether you can do that on experimental design — reading a bunch of papers and then deciding this is the thing I’m going to build — I’m less convinced that’s necessarily the best way. But for these types of experiments, it certainly seems the way. It’s actually key for making the whole product, because otherwise you’re handed so much information back that the problem shifts to processing it. One of the lessons I’ve taken from talking to people who have tried things in media optimization — tried cloud labs and so on — is that there’s a lot of hesitation around sharing cell lines. Understandable. And it also comes down to information about the results from those cell lines. For example, a company that was running experiments externally was not allowed to look at the results of some of those analyses — it was in their contract. And it’s really hard to improve or build your own model if you cannot look at the results. What we’re building is a federated model that allows the customers to run the device on-site. They can pull the model, get a new experiment design, run it there, and then the model weights are updated, right? This is the same way that Tesla’s self-driving was trained, right? Federated learning resolves that IP-sharing constraint. And the reason that’s so powerful is that now you have a model that is learning from diverse experiments across different cell lines, at different places, but still on the same hardware. That’s really key, because otherwise there’s too much experimental variability in the data you’re getting back, and you’re not gonna generalize well on it.
And the sort of hedged bet here is that if it’s not tractable through machine learning and models, we are still building the highest throughput, most economic, and fastest way to get to that answer through still running experiments. And if it is tractable, we’re going to have the best model for running those experiments. And I think the answer is actually going to be a blend of both. I do not believe that experimentation is going away. But I do think that we will be able to get to much better answers much faster, because that’s really the ideal, right? The ideal is, once you have that model, now you can feed it in even earlier in the process, right? When you’re doing your strain engineering. So coupling those together becomes possible once you have a model.
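The federated loop Sterling describes — pull the shared model, train locally on your own cell-line data, push back only weight updates — can be sketched as a FedAvg-style averaging step. The lists-of-floats weights and the two-site example are illustrative assumptions, not Iku’s training code:

```python
def federated_average(base_weights, site_deltas):
    """One FedAvg-style round: each customer site trains locally and ships
    back only a weight delta, never the raw experimental results. Plain
    lists of floats stand in for model tensors; this is a sketch of the
    scheme described, not an actual implementation."""
    n = len(site_deltas)
    return [
        w + sum(delta[i] for delta in site_deltas) / n
        for i, w in enumerate(base_weights)
    ]

# Two sites push deltas; the shared model moves by their average,
# so every site benefits without any raw data leaving its lab.
updated = federated_average([1.0, 2.0], [[0.2, -0.4], [0.0, 0.0]])
```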
Abhi: What parameters does the model actually intake? I imagine it takes all the inputs you’ve given into the system, all the outputs you get out of the system, and maybe what the system is actually meant to produce, and the strain itself. Is that everything or are there others?
Sterling: That’s — I think that’s a complete view.
Abhi: Okay. If the belief is that you’ll probably still need human experimentation to help the system along, and maybe the ML won’t fix everything zero-shot — can I conceptualize this as like there are 10x media optimization engineers, and they’ll be able to iterate much faster on this model system as a result of that? Or do you imagine media — bioprocess engineering is a pretty standardized field where these are the first 10,000 things you try, and maybe in the old world you get to try like 5% of that, and in the new world you try those 10,000 things? But ultimately it’s the same set of parameters that the media optimization engineer is tuning.
Sterling: So the question is: are we tuning a different, larger set of things than what the engineer normally tunes?
Abhi: Yeah. Like, all the knobs that the engineer usually gets to tune — do they also get to tune in the system? Or is it a subset, or maybe even larger?
Sterling: It’s a superset.
Abhi: Superset. Okay.
Sterling: You’re getting to tune far more. And it’s a superset in a few different senses. The first is that just bringing the economics down and making it automatic — even if you previously had the capability to change a variable, you wouldn’t have the time budget or the capital budget to actually exploit it. That’s one sense. The second is that it allows you to make finer interventions, with more feedback built in. The reason the real-time sensors are important is that what you actually want to do is anticipate what the cell wants before it needs it. There’s always a delay between when something gets introduced into the environment and when it gets taken up by the cell, right? So ideally I actually want to see those signals happening before the cell needs it. In order to do that, you need real-time sensors that are picking up on that and starting to match it. So that’s a domain that’s just not possible —
— in other systems.
[01:24:32] $8/lane vs. $20,000/lane: the economic utility of Iku’s device
Abhi: I’m curious — I assume there are microfluidic bioreactor systems that at least exist in the literature. How much improvement do people generally see by going to these systems versus the Fiji-bottle-sized benchtop?
Sterling: Right now? I would say close to zero. And the reason is economic. One metric, or lens, for looking at it is just the all-in cost of getting that dynamic cell culture data — that one experiment, that data. And there are two components to that. The first is your CapEx, right? How much did it cost to actually get this device in here and use it? It’s really a CapEx per experimental lane. The second is the OpEx: every time we run the experiment, how much does that cost? To give an example — for the benchtop reactors, depending on whether you’re going with the gold standard or some of the derivative ones now, let’s say the CapEx is between $5,000 and $15,000, maybe $20,000, for each experimental lane. And then your OpEx is — you’ve got not just the media; you need to take the time to grow the cells up to seed it, you’ve got the human coming in and running it, and then you’ve got the actual disposable, or cleaning and sterilizing the thing. So it ends up being around $1,500 to $2,000 for every experiment you run. The closest microfluidic system in capability is only four lanes and it’s $80,000 — so that still gives you a per-lane cost of $20,000. And the disposable costs are, I think, still around $500 to $700 for each.
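The cost framing here reduces to a one-line amortization: CapEx per lane spread over its useful runs, plus per-run OpEx. The dollar figures below are the rough numbers from the conversation (and the $8/lane figure from the introduction); the 100-run amortization horizon is an assumption for illustration:

```python
def cost_per_lane_experiment(capex_per_lane, runs_amortized_over, opex_per_run):
    """All-in cost of one dynamic cell-culture data point: amortized CapEx
    per experimental lane plus per-run OpEx. Inputs are in dollars; the
    amortization horizon is an illustrative assumption."""
    return capex_per_lane / runs_amortized_over + opex_per_run

benchtop = cost_per_lane_experiment(20_000, 100, 1_500)   # gold-standard reactor
microfluidic = cost_per_lane_experiment(20_000, 100, 600)  # $80k / 4 lanes
pcb = cost_per_lane_experiment(8, 100, 4)                  # PCB lane + chip
```

On these assumed numbers the benchtop lane costs on the order of a thousand dollars per experiment while the PCB lane is in the single digits, which is the two-to-three-orders-of-magnitude gap the conversation keeps returning to.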