Building Enterprise AI With Startup Velocity—Microsoft’s Taylor Black
AI has moved from pilot projects to boardroom priorities across large organizations. How do teams turn AI work into real business results instead of stalled experiments?
This week’s VentureFuel Visionary is Taylor Black, Director of AI & Venture Ecosystems in Microsoft’s Office of the CTO. He works at the intersection of AI strategy, venture ecosystems, and internal venture building.
In this episode, we unpack how AI turns impossible problems into merely difficult ones, and how a growth mindset of hyper-abundance can be paired with the enterprise rigor and internal velocity needed to scale.
Tune in as Taylor brings a rare dual perspective: enterprise AI leadership inside one of the world’s largest technology companies combined with firsthand startup-building experience!
Episode Highlights
- AI Starts With the Right Problem – Taylor explores why AI is not a single solution but a thinking tool, and why enterprises fail when they apply it without clearly defined, high-impact problems.
- Escaping AI Pilot Purgatory – He talks about how many AI initiatives get stuck in pilot mode, and what needs to change for experiments to turn into real, scaled business outcomes.
- Balancing Speed and Control – Taylor explains how large organizations can balance startup-level speed with enterprise-level security, governance, and operational rigor.
- Build, Buy, or Partner: It’s About the Experiment – He breaks down the build vs. buy vs. partner decision for AI, framing each option as a different way to experiment and reduce risk.
- Growth Beats Efficiency – The conversation delves into why the biggest AI wins come from growth and new possibilities, not just efficiency gains, and how leaders should rethink what success looks like.
Episode Transcript
Fred Schonenberg
Hello everyone and welcome to the VentureFuel Visionaries podcast. I'm your host Fred Schonenberg and today on the show I am joined by Taylor Black. Taylor is a technology and innovation leader working at the intersection of AI strategy, venture ecosystems, and internal venture building. He is the Director of AI and Venture Ecosystems within Microsoft's Office of the CTO, where he helps connect emerging technologies with the partners, founders, and teams needed to bring new ideas to life.
His experience spans startup building, mentorship, corporate incubation, and he brings a really grounded perspective on what it takes to move from bold concepts to actual scalable real-world impact, especially within a large organization, something we talk about a lot on the show, so I'm super excited to have him here.
We're going to unpack how enterprises and startups can collaborate better together, what makes an AI initiative worth incubating versus maybe building versus buying versus partnering, and how innovation leaders can create the conditions for experiments to become products that customers actually use. So with all that, Taylor, welcome to the show.
Taylor Black
Fred, it's great to be here. Thanks for having me.
Fred Schonenberg
So you sit at the intersection of AI strategy, venture ecosystems, inside of Microsoft, a pretty large organization. Can you talk about what the real business problem enterprises are trying to solve with AI right now, or maybe what ones they aren't, but just curious kind of beyond the hype cycle?
Taylor Black
Yeah, certainly. I think there's kind of two different ways of looking at this. First off, there isn't a single problem that they're trying to solve with AI. AI ends up being a really powerful tool for us to kind of co-think, to think better, to think more richly, more broadly, more efficiently. And in the agentic space, of course, to be able to do all of those things more efficiently, richer, better as well.
But all of that presupposes that you already have a good set of questions or a good set of problem statements that you're working on. I think all of us saw the study last fall where a whole bunch of companies reported struggling to realize ROI from AI. And it's for all the same reasons that I've seen over the course of my innovation career of trying to wave a magic wand like innovation or AI over a company and hoping it does something.
When really, as you and I both know, and all of your podcast listeners too, I'm sure, it's really finding the right problem to apply this kind of technology to. And so when we're trying to solve a particular problem inside the enterprise, there's a bunch of different ways of doing that, which is why we have an ecosystem of how we try to think about solving different kinds of problems in different kinds of states, in different kinds of contexts, so that we can learn from that experimentation and move it into different parts of the company as a result.
Fred Schonenberg
I love that. Yeah. We talk a lot about the importance of finding the right problem worth solving and agreeing that if you were to solve it, it has a material impact on the business. Those pieces seem so obvious before you start thinking about AI or whatever the shiny tool is that you really have to ground yourself on why it is worth doing.
Taylor Black
Yeah. Yeah. It's hard. It's hard to do.
Fred Schonenberg
It is hard. So let me ask you this. There's an explosion of AI tools. A lot of enterprises are piloting. Curious from your vantage point, how do you separate the enterprise-ready innovations from startup petting zoo innovation theater?
Taylor Black
Yeah. I think that it's a hard adoption curve. I think that what you need to do from an enterprise standpoint is carve out little experimental teams that slowly understand the technology and slowly understand its ramifications on the broader enterprise from a security, infrastructure, and data standpoint, so that you're able to bring the technology in carefully and circumspectly into your organization as a whole.
Part of the reason for this, of course, is because of the human actors involved. If you don't understand the tool that you're using, it's really easy to muck things up quickly. I don't know if you've seen all of the stuff happening with the bot ecosystem of downloading an agent to your personal machine and letting it go ham on all of the different things that you can do there.
Great on your personal machine, although you might still want it air gapped. But that shows a fundamental misunderstanding or perhaps not a full appreciation of what the tool is. By taking it into a specially designated group where there's a lot of experimentation friendly sorts of things, there's a lot of grace for failure, there's a lot of kind of sandboxing going on.
Fred Schonenberg
Yeah.
Taylor Black
You get to play around with them before letting them re-home inside the enterprise that you've created. Now, that goes for big bold projects. Clearly, there are smaller, simpler things like Copilot where it's already been vetted to a certain extent and it's just a matter of letting your team, again, kind of experiment and use them to understand the best use cases, where you're able to realize ROI not only on a personal level, but on a team and then group and organizational level. But the same sort of principle applies, because we humans end up being the slower piece in the adoption cycle.
Fred Schonenberg
It's really interesting. I think one of the things that a lot of organizations fear or individuals fear and organizations fear is this idea of centralizing this to like a SWAT team to really get control of the technology and test it and put all the compliance and rules on it. That takes time.
Taylor Black
It does.
Fred Schonenberg
Everybody's sort of enamored with the idea that, oh, I can take vibe coding, I can let everyone just vibe code their own solutions and we're good. It's like, well, careful what happens there. So, what do you think about that? What should be centralized? What should get pushed out to the end users? How do you balance that game?
Taylor Black
Oh, man. It's complicated. It's complicated, for sure. I think that the way of approaching this is kind of in that sort of rings and aisle thinking, right? It's almost similar to the way in which you do it as a startup where you want to make sure that you have 10 happy customers before you start reaching 400 or a thousand. Because in the end, the value prop, landing a stronger value prop for a smaller number of people ends up being more important than landing a mediocre value prop for a huge number of people in almost all business models.
Some business models are immune to that, but the vast majority are not. And it's the same sort of thing when you're trying to realize value with AI too. It's not that when you bring it in, it's going to go to the SWAT team that kind of architects it into a container and nobody is able to realize the value there. That too could be a trajectory, but it ends up being perhaps homed with the wrong team or perhaps homed with not the right diversity on a particular team where not all of the possibilities or all of the potential use cases that you want to kind of try through that technological mechanism for realizing ROI inside the company are best represented.
Because the company and the enterprise is set up in a way where there's a variety of different creative tensions going on between the kinds of risks that each department is allowed to go after or wants to caution against. I say tongue in cheek, of course, that if you let the innovation department run the company, you wouldn't have any customers in a very short period of time because everything would fall apart because they're all chasing the new shiny thing. But of course, the same is true if you let security run the company where nobody would be able to access anything because indeed that is the safest way of going about things.
So there's a creative tension too that you have to have on this kind of SWAT team that you mentioned where they are running the experiments, they are being honest about the business trade-offs of certain aspects of the technology where you're still doing justice to the use cases that are being brought forward in running them through the actual technology play.
Fred Schonenberg
That's super interesting. So one of the things that really intrigued me about your job and your background is, you know, you have called the intrapreneur, you have venture building, you've got that sort of ecosystem play. One of the things that has come up a lot in all the innovation work I've done is the idea of the buy, build, or partner. And it has like renewed energy now with AI because people feel really passionately about whether they should build everything in-house because it's become easier to build or, oh my gosh, it's moving so fast there's no way we should be building this, we should just partner. And maybe a little less on the buy side, but I'm curious when you're thinking about evaluating a new AI opportunity, how are you determining whether it should be internal, whether it should be an acquisition, or a partnership?
Taylor Black
Yeah, you know, Microsoft is in one of the unique positions of being able to pursue all of those strategies at the same time. But I think what's important to realize about that is that they're each different ways of experimenting with solving certain kinds of problems. If I want to be wild and crazy about the possibilities, a great way of experimenting with that is by putting some money into a startup so that I can earn a board seat and I can watch how things go down as they can be wild and crazy around their experimentation path.
That's a wonderful value of a startup. Enterprises just don't have that luxury, they can't do that. They have 100 million customers that are expecting their thing to work on Monday morning. And so that's one experimentation path and kind of thinking about that as a way of like, what kinds of experiments do I want to run? Is a startup a good place for that? And the partnership side, right? We do a lot of partnerships, particularly with our, not only with our business development groups, not only with our Pegasus program, and our kind of startup ecosystem programs, but also with Microsoft Research, where we might have something that we think will be a great kind of platform aspect.
But there's other aspects of the whole stack that we need to work with other people on. We'll bifurcate out the thing that we want to experiment with in conjunction with a partner who can do the experimentation around the other. If we think about it in that way, that has been a great place to run that kind of experiment. What we're also seeing a lot of is the acqui-hire model, where we see a startup that has built a really crack team that's solving certain kinds of things. They work really well together. They understand this new AI-native engineering or AI-native way of working. And we want to bring them in and have them not only drive their technology forward, but also teach us how to operate in a certain fashion.
That's another way of experimenting with a set of problems that we're trying to solve and then seeing how that can go too. But of course, when you're a platform company like Microsoft, there are certain kinds of things that you can only do internally, particularly if they span multiple business units. It has to have that, you know, we grew it here sort of aspect, because there isn't a better way of going about it.
That itself is a different set of problems that moves at a speed that can take on different kinds of risk on a different kind of timeframe. And so part of the ecosystem work that I do and that I spend a lot of time thinking about is what experiments are we running? Why am I running it here instead of one of the other places that I can run an experiment?
Fred Schonenberg
It's super interesting. When you think about it, one of the things you said, experiment, and some people call it pilots, right? There's a million ways to refer to that. One of the things that has come up a lot in AI in particular is that a lot of groups are running lots of pilots and nothing is scaling. This idea of AI initiatives, they get stuck in pilot mode. I'm curious if you're seeing anything that helps or that is breaking that mold, that is making some of these experiments turn into real business use cases.
Taylor Black
Yeah. No, I think there's a lot to unpack there. And the two places where I see things move out of pilot into full business mode end up having two key attributes. First off, the humans: every human touching the thing is fully invested and using the thing as much as they can. Now that's hard. And in some ways, it starts out being weird, right? It's like a lumberjack with a crosscut saw getting a chainsaw.
Like there's a bunch of things that you have to change in the way that you worked in order to realize the efficiency gains that a chainsaw has. And you're not going to realize them if you still love the old tool and the efficiencies that you've already built into your own practices around the old tool, and aren't willing to give the new tool the old college try.
Now, part of that too is due to management, right? Because what's going to happen is you want to see the efficiency gains immediately. You don't want to see a dip back below crosscut saw levels. You want to go immediately from crosscut saw levels of productivity to chainsaw levels of productivity. Well, I'm sorry, that's not going to happen. That's not how humans learn how to use tools. There's going to be a big drop in productivity for a chunk of time, longer or shorter depending on the complexity and on how motivated your team is, as the shift happens.
And so you've got to be willing to weather that from a management standpoint. You've got to be willing to weather that from a team dynamics standpoint, from giving grace to people who are really trying to work on the work with these new tools. And that's hard to kind of achieve when you're under the pressures of revenue and all of the other things that keep our businesses thriving, right? So that's one aspect, the human aspect there.
The second aspect, I think, is slowly being ameliorated from a tech standpoint, definitely as we enter the agentic ecosystem technology layers that are really coming online this year. It was just the case that the tools didn't have enough available to them to really be as systemically useful as they could be, because of the various things they just didn't have access to.
An analogy that I use often is: it's great if I have an agent on my computer and it can do certain kinds of things for me, and I learn how to use it as a tool there. But it's kind of like training my own gardener for my own lawn. Like, well, sweet, I have a gardener for my own lawn. It's not like I can farm my gardener out to other lawns and achieve the scale of possibility there. Because until I have an agentic ecosystem, there are no roads. There is no currency. There is no phone number for me to tell them, go over to that yard.
So having this infrastructure set up and this ecosystem set up so that agentic interactions can happen so that we can realize the scale that was promised by AI here, that's a hard problem. That's a hard problem. This is why startups and I think small to medium sized businesses are able to kind of achieve things more quickly because the scope of data and security that they're working in ends up being somewhat more manageable. But that was a technical problem. We're getting to and have achieved solutions in a number of different areas along those lines. So I can now rent out the gardener that I've spent so much time training on my own lawn to other lawns and start realizing that value.
Fred Schonenberg
It's super interesting. We always use the example internally of you building the tool to mow your lawn, right? And then it's like, okay, wait a second. Can you let this loose on someone else's lawn that looks totally different? And by the way, it's in a different town. Does it come back to you with what you need and how do you realize those gains?
And I agree that agentic has the promise of connecting those pieces. Of course, that creates more risk and fear and need for security and other things like that. So it's an interesting game. I'm curious on your side, when pitching AI initiatives internally, what do you see that is resonating? Are you leaning into the cost efficiency side of things? Is it about growth?
We had a guest that was talking about sort of like a two by two of like, hey, there's internal, external, and there's efficiency and growth. And everybody's excited about AI for external growth, and nobody's using it for that. Everybody's using it for internal efficiency. I'm just curious how you think about that equation.
Taylor Black
Yeah, that resonates a lot. I see that a lot as well. But at the same time, the places where we've seen the most standout business value created is in the growth space. Kevin Scott, our CTO, is fond of saying: a lot of difficult problems have become trivial as a result of this technology, so what impossible problems have become merely difficult? That's a different mindset, right? The mindset of efficiency is one where, well, we already knew the problems from an efficiency standpoint.
And so we can continue paring away on those because they are known. The question is now: what are the unknown unknowns that we can tackle from a growth standpoint, from a hyper-abundant sort of mindset? And how can we go after those? That, I think, is the true value unlock of implementing AI inside your own company, inside your own ecosystem.
We're seeing it already in major ways from a scientific discovery sort of standpoint, because that's the kind of orientation that scientists already came with, right? Unsolved problems, rather than kind of efficiency gains. And so I encourage kind of that out-of-the-box thinking, that pairing of the white space innovation with the, how can we think about this in novel ways as a result of the utilization of this technology?
And where those two overlap, and it's hard, it's a hard overlap, what we see there is just astonishing. I mean, what's fun, and maybe I'm speaking for all innovators out there, but like, boy, does that sound cool, right? Like, that's what we want to do. We want to focus on the big unknown unknowns and the art of the possible. What a cool way to think about that.
Fred Schonenberg
Very, very cool. Are there any, I'll call it capabilities or maybe traits of folks that you think are non-negotiable for teams as they're trying to scale AI inside of large, complex organizations like your own? And part of me is thinking about like, hey, the early adopters, the innovators, the folks that are out there that are like, hey, there's big potential here, but maybe the rest of the organization is not really interested in this growth mindset. Any guidance you have for them on how to be successful within a large org?
Taylor Black
Yeah, you know, I think there's a fundamental problem with people who have the growth mindset and like experimentation: they're happy doing the experimentation on their own. And then there's the ability of having those folks show examples of how they've utilized tools for certain kinds of things that spark the insight for perhaps the slower adopters, or those with a more fixed mindset, to be excited by what's possible.
You know, I'm always astonished by the fact that we can continue dropping the fastest mile time or the fastest marathon, right? And it seems that as soon as somebody breaks it, it's now available to other people to break. And I see the visionaries, the growth mindset folks as being those people who give part of the vision there. And the problem is you can't really give part of the vision if you're not being able to showcase how that worked, how you went about it, like the achievement that you're able to unlock.
And so I think giving a space for people to show their experimentation, to show off the things that failed, to show off the ways in which they're using tools on the problems that they know well. And as a result, some of their peers know well, so that their peers are able to kind of take that and apply it to the problems more concretely rather than, you know, using AI to create a Mad Lib for your brother's baby shower, you know, because that's the only example that you've seen. That sort of experimentation, that tool use so that people have a better felt sense of how these tools operate, how these tools can work in their own felt experience, their own problems that they're trying to solve.
I find that creates a great culture on teams and a great culture within companies to see where those impossible problems, those unknown unknowns actually lie because they understand what the technology is capable of.
Fred Schonenberg
I love that. Let me, I'm not sure why I wrote it this way, but I like how I wrote it, which was like this idea of internal venture velocity, right? So inside of a large company like your own, how do you create conditions where internal ventures or internal ideas can move with more startup speed knowing it wouldn't be like a wild and crazy startup zone, but without sacrificing that enterprise rigor.
Taylor Black
Oh man, this is my favorite problem, one of the favorite problems that I've ever worked on in my entire career. I was brought into the Office of the CTO to build a kind of venture studio internally for exactly that, right? Because one of the reasons you build things internally rather than acquire them or partner around them is that you want your DNA all over it, right? And so when I came into Microsoft, I was like, what's Microsoft's DNA? They want to have a thing that builds things that are Microsoft DNA things.
What's Microsoft's DNA? For Microsoft, at least from my point of view with regard to this, and there seems to be some solid traction for it. One is our partner in the sales ecosystem. And two is our ability to work across a variety of different business units in order to achieve kinds of levels of scale that would just be impossible to do outside.
So that kind of platform thinking and that go-to-market sort of thinking. So when I came in to start this venture studio, the first thing that I did is I spent a year and a half unpacking the operational processes necessary to get a SKU for something that hadn't already been defined very discretely by a business planning unit. Hella boring, right? Like it's as operational and non-sexy as you can get.
But what that did is it helped us solve two things. One, if you have a SKU, then your partner in the sales ecosystem knows how to sell you. Two, if you already have your partner in the sales ecosystem set up to sell you, then it's very easy to graduate from an incubation to a business unit, because you already have the operational path for money to flow. And now all you have is a growth problem. It's like hitting Series A as a startup company: you've found product-market fit.
Now all you have to do is pour resources over the top of it and you're able to scale the thing. That allowed incubations to move much more quickly because they didn't have to figure out how to go to market. They used Microsoft's go-to-market. They didn't have to figure out how to transact. In fact, we would have them not transact until they got to a much larger scale because transacting ended up being so expensive and complicated inside the enterprise for incubations that it just wasn't worth it. And so a bunch of counterintuitive things that made it so that we created an antibody layer protecting the innovation in this sort of fashion from the enterprise antibodies.
Now us innovators can talk about the enterprise in a pejorative sort of way in that kind of fashion, which I don't intend to at all, right? Because the guy who's leading the business unit, who has a decision in front of them of do I take a risky bet on innovation or do I grow revenue 10%? They always make the right decision of growing revenue 10%, right? That's a no-brainer.
It's up to us as innovators to kind of paint the picture for them of how this will be revenue for them in three years or five years or seven years, depending on the kind of risk appetite for their own business unit, right? But by smoothing those operational barriers so that you can show that this innovation already has a path to transact, already has a way of doing what their business unit needs to do just table stakes in order to make revenue, that is a way of balancing the enterprise rigor by having your incubation unit go through the enterprise rigor while having the incubations themselves inside the incubation unit be able to move at startup-like velocity.
Fred Schonenberg
Yeah, man, that could be a whole other podcast. I am very interested in that. Super interesting. So let me do this. Two questions for you. One has a bunch of parts to it. The first is: looking ahead three to five years out, what do you think is going to separate enterprises that truly capitalize on AI and agentic versus those that fall behind?
Taylor Black
Discovering those formerly impossible but now merely difficult problems.
Fred Schonenberg
I love it. All right. So I'm going to go to a rapid fire. I'm going to throw a couple of sentences at you and just give me a brief answer for each one. Does that sound good?
Taylor Black
Yeah.
Fred Schonenberg
All right. What's the most overused word in enterprise AI right now?
Taylor Black
Efficiency.
Fred Schonenberg
Love it. What's one mistake large companies consistently make when launching AI initiatives?
Taylor Black
Giving their employees token budgets.
Fred Schonenberg
Token budgets. Love it. What's one thing startups get wrong about selling into enterprises or trying to partner with large groups?
Taylor Black
Security and privacy are hard.
Fred Schonenberg
Very, very true. If you had to pick one metric to measure AI success inside of a Fortune 500, what would it be?
Taylor Black
Role scope increase.
Fred Schonenberg
That's very cool. Over the next five years, what do you think is going to dominate in AI? Building, buying, or partnering?
Taylor Black
That's a good one. Partnering.
Fred Schonenberg
Love it. What is one capability every chief innovation or chief strategy officer should be building right now within their large org?
Taylor Black
Scaling ideation and iteration to infinity because of agentic and AI capabilities.
Fred Schonenberg
I love it. Taylor, this was awesome.
VentureFuel builds and accelerates innovation programs for industry leaders by helping them unlock the power of External Innovation via startup collaborations.
