070: Kubernetes with Joe Beda

Hosted by Charles Lowell and Elrick Ryan

May 18th, 2017.

Kubernetes

Joe Beda @jbeda | Heptio | eightypercent.net

Show Notes:

  • 00:51 - What is Kubernetes? Why does it exist?
  • 07:32 - Kubernetes Cluster; Cluster Autoscaling
  • 11:43 - Application Abstraction
  • 14:44 - Services That Implement Kubernetes
  • 16:08 - Starting Heptio
  • 17:58 - Kubernetes vs Services Like Cloud Foundry and OpenShift
  • 22:39 - Getting Started with Kubernetes
  • 27:37 - Working on the Original Internet Explorer Team

Resources:

Open Source Bridge: Enter the coupon code PODCAST to get $50 off a ticket! The conference will be held June 20-23, 2017 at The Eliot Center in downtown Portland, Oregon.

Transcript:

CHARLES: Hello everybody and welcome to The Frontside Podcast, Episode 70. With me is Elrick Ryan.

ELRICK: Hey, what's going on?

CHARLES: We're going to get started with our guest, Joe Beda, who many of you may have heard of before. You've probably heard of the technology that he created, or was a key part of creating, a self-described "medium deal."

[Laughter]

JOE: Thanks for having me on. I really appreciate it.

CHARLES: Joe, here at The Frontside, most of what we do is UI-related, completely frontend, but obviously the frontend is built on backend technology and we need to be running things that serve our clients. Kubernetes is something I started hearing about, I don't know, maybe a year ago. All of a sudden it just started popping up in my Twitter feed and I was like, "Hmm, that's a weird word." Then people started talking more and more about it, and it moved from something that was behind me into something that was to the side, and now it's edging into our peripheral vision more and more as more people adopt it and build things on top of it.

I'm really excited to have you here on the show to just talk about it. I guess we should start by saying what is the reason for its existence? What are the unique set of problems that you were encountering or noticed that everybody was encountering that caused you to want to create this?

JOE: That's a really good setup. Just by way of context, I spent about 10 years at Google. Before that, I was at Microsoft working on Internet Explorer and Windows Presentation Foundation, which maybe some of your listeners actually had to go ahead and use. I learned how to write software for the server at Google, so my experience of what it takes to build and deploy software was really warped by that. It really doesn't match what pretty much anybody else in the industry does, or at least did.

As my career progressed, I ended up starting this project called Google Compute Engine, which is Google's virtual machine as a service, analogous to, say, EC2. As that became more and more of a priority for the company, there was this idea that we wanted internal Google developers to have a shared experience with external users. Internally, Google hardly did anything with virtual machines. Everything was with containers, and Google had built up some really sophisticated systems to manage containers across very large clusters of computers.

For Google developers, the interface to the world of production (how you actually launch stuff, monitor it and maintain it) was through this toolset, Borg, and all these fellow travelers that come along with it inside of Google. Nobody really managed machines using traditional configuration management tools like Puppet or Chef or anything like that. It's a completely different experience.

We built Compute Engine, GCE, and then I had a new boss because of an executive shuffle. He'd been at Google for a while, and he spun up a VM, and his reaction to the thing was, "Now what?" He was sitting there at a root prompt going, "I don't know what to do now." It turns out that inside of Google, that was actually a common reaction. It just felt incredibly primitive to have a raw VM that you could SSH into, because there's so much to be done above that to get to something that you're comfortable building a production-grade service on top of.

The choice, as Google got more and more serious about cloud, was to either have everybody inside of Google start using raw VMs and live the life that everybody outside of Google is living, or try to bring the experience around Borg, this idea of very dynamic, container-centric, scheduled-cluster thinking, outside of Google. Borg was entangled enough with the rest of Google's systems that porting and externalizing it directly wasn't super practical. Me and a couple of other folks, Brendan Burns and Craig McLuckie, pitched this crazy idea of starting a new open source project that borrowed a lot of the ideas from Borg but really melded them with the needs of folks outside of Google, because again, Google is a bit of a special case in so many ways.

The core problem that we're solving here is: how do you move the idea of deploying software away from these physical concepts, like virtual machines, where the number of problems you have to solve to actually get a thing up and running is pretty great? How do we move to a higher, more logical set of abstractions? Instead of worrying about what kernel you're running on, instead of worrying about individual nodes and what happens if a node goes down, you can just say, "Make sure this thing is running," and the system will do its best to make sure that it's running. Then you can also do interesting things like, "Make sure 10 of these things are running," which at Google scale ends up being important.

CHARLES: When you say like a thing, you're talking about like a database server or API server or --?

JOE: Yeah, any process that you could want to be running. Exactly. The abstraction that you think about when you're deploying stuff into the cloud moves from a virtual machine to a process. When I say process, I mean a process plus all the things that it needs, so that ends up being a container or a Docker image or something along those lines. Now, the way that Google does it internally is slightly different from how it's done with Docker, but you can squint at these things and see a lot of parallels there.

When Docker first came out, it was really good. I think with Docker and containers, people look for three things. The first one is that they want a packaged artifact: something that I can create, run on my laptop, run in a data center, and it's mostly the same thing running in both places, and that's an incredibly useful thing. On your Mac you have a .app, and it's really a directory, but the Finder treats it as something you can just drag around and the thing runs. Containers are that for the server. You can just say, run this thing on the server, and you're pretty sure that it's going to run.
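To make that packaged-artifact idea concrete, here is a minimal Dockerfile sketch. The base image, file names and commands are placeholders, not anything from the episode:

    # Base image supplies the runtime the process needs.
    FROM node:8
    WORKDIR /app
    # Bundle the application code and its dependencies into the image.
    COPY . .
    RUN npm install --production
    # The single process this container runs.
    CMD ["node", "server.js"]

Building it once with docker build yields the same runnable artifact on a laptop or on a server in a data center.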

That's a huge step forward, and I think that's where most folks really see the value with respect to Docker. Another thing folks look at with container technology is efficiency: being able to pack a lot of stuff onto a little bit of hardware. That was the main driver for Google. Google has so many computers that if you improve utilization by 1%, that ends up being real money. Then the last thing is, I think a lot of folks look at this as a security boundary, and there are some real nuanced conversations to have around that.

The goal is to take that logical infrastructure and make it such that, instead of talking about raw VMs, you're actually talking about containers and processes and how these things relate to each other. Yet you still have the flexibility of a toolbox that you get with an infrastructure-level system, versus something like Heroku or App Engine or these other platforms as a service, which are relatively fixed-function in terms of the architectures you can build. I think the container cluster stuff that you see with things like Kubernetes is a nice middle ground between raw VMs and a very, very opinionated platform as a service. It ends up being a building block for building more specialized experiences. There's a lot to digest there, so I apologize.

CHARLES: Yeah, there's a lot to digest there but we can jump right into digesting it. You were talking about the different abstractions where you have your hardware, your virtual machine and the containers that are running on top of that virtual machine and then you mentioned, I think I'm all the way up there but then you said Kubernetes cluster. What is the anatomy of a Kubernetes cluster and what does that entail? And what can you do with it?

JOE: When folks talk about Kubernetes, I think there are two different audiences, and it's important to talk about the experience from each audience. There's the audience from the point of view of what it takes to actually run a cluster (this is the cluster operator audience), and then there's the audience in terms of what it takes to use a cluster: assuming that somebody else is running a cluster for me, what does it look like for me to go ahead and use this thing? This is really different from a lot of DevOps tools, which really mix these things together. We've tried to create a clean split here.

I'm going to skip past what it means to launch and run a Kubernetes cluster because it turns out that over time, this is going to be something that you can just have somebody else do for you. It’s like running your own MySQL database versus using RDS in Amazon. At some point, you're going to be like, "You know what, that's a pain in the butt. I want to make that somebody else's problem."

When it comes to using the cluster, pretty much what it comes down to is that you talk to the cluster. There's an API to a cluster, and that API is a spiritual cousin to something like the EC2 API. You can talk to this API (it's a RESTful API) and say, "Make sure that you have 10 of these container images running," and then Kubernetes will make sure that ten of those things are running. If a node goes down, it'll start another one up and maintain that. That's the first piece of the puzzle. That creates a very dynamic environment where you can actually program these things coming and going, scaling up and down.
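As a concrete sketch of what "make sure ten of these are running" looks like, here is a minimal Deployment manifest; the names and image are placeholders, and exact API versions vary by Kubernetes release:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api-server
    spec:
      replicas: 10                 # Kubernetes keeps ten copies running.
      selector:
        matchLabels:
          app: api-server
      template:
        metadata:
          labels:
            app: api-server
        spec:
          containers:
          - name: api-server
            image: example.com/api-server:1.0   # placeholder image
            ports:
            - containerPort: 8080

Submitting it with kubectl apply -f deployment.yaml hands that desired state to the RESTful API; if a node dies, the cluster starts replacements until ten replicas are running again.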

The next piece of the puzzle that really starts to be necessary is that if you have things moving around, you need a way to find them. There are built-in ideas of defining what a service is and then doing service discovery. Service discovery is a fancy name for naming: I have a name for something, and I want to look it up to get an IP address so that I can talk to it. Traditionally we use DNS. DNS is problematic in these super dynamic environments, so a lot of folks, as they build backend systems within the data center, start moving past DNS to something that's a lot more dynamic and purpose-built for that. But you can think of it in your mind as a fancy, super-fast DNS.
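The built-in object for this is the Service. A minimal sketch, reusing the placeholder labels from the Deployment above:

    apiVersion: v1
    kind: Service
    metadata:
      name: api-server
    spec:
      selector:
        app: api-server        # route to any pod carrying this label
      ports:
      - port: 80               # the stable port clients connect to
        targetPort: 8080       # the port the container actually listens on

Other pods in the cluster can then reach it at the stable name api-server (or api-server.<namespace>.svc.cluster.local), no matter which nodes the pods land on or how often they move.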

CHARLES: The cluster is itself something that's abstract, so I can change its state and configure it and say, "I want 10 instances of Postgres running," or, "I want between five and 15," and it will handle all of that for you. How do you then make it smart so that you can react to load? For example, all of a sudden this thing is handling more load, so I need to say... what's the word I'm looking for, I need to handle --

JOE: Autoscale?

CHARLES: Yeah, autoscale. Are there primitives for that?

JOE: Exactly. Kubernetes itself was meant to be a toolbox that you can build on top of. There are some common community-built primitives for doing, excuse the nomenclature here because there's a lot of it in Kubernetes and I can define it, what's called Horizontal Pod Autoscaling. It's this idea that you can have a set of pods and you want to tune the number of replicas of that pod based on load. That's something that's built in. But now maybe your cluster doesn't have enough nodes as you scale up and down, so there's this idea of cluster autoscaling, where I want to add more capacity that I'm actually launching these things into.
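A sketch of that built-in primitive, sized to the "between five and 15" example from a moment ago and pointed at the placeholder Deployment from earlier:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: api-server
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: api-server
      minReplicas: 5
      maxReplicas: 15
      targetCPUUtilizationPercentage: 80   # add or remove replicas to hover around this load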

Fundamentally, Kubernetes is built on top of virtual machines, so at the base there's a bunch of virtual or physical hardware that's running, and then there's the question of how I schedule stuff onto that and pack things into the cluster. There's this idea of scaling the cluster, but then also scaling workloads running on top of the cluster. If you find that some of these algorithms or methods (for how you want to scale things, when you want to launch things, how you want to hook them up) don't work for you, the Kubernetes system itself is programmable, so you can build your own algorithms for how you want to launch and control things. It's really built from the get-go to be an extensible system.

CHARLES: One question that keeps coming up as I hear you describe these things: the Kubernetes cluster, then, is not application-oriented, so you could have multiple applications running on a single cluster?

JOE: Very much so.

CHARLES: How do you then layer on your application abstraction on top of this cluster abstraction?

JOE: An application is made up of a bunch of running bits, whether it be a database or anything else. I think as we move towards microservices, it's not just going to be one set of code. It can be a bunch of sets of code that are working together, or a bunch of servers that are working together. There are these ideas like: I want to run 10 of these things, I want to run five of these things, I want to run three of these things, then I want them to be able to find each other, and then I want to take this thing and expose it out to the internet through a load balancer on Amazon, for example. Kubernetes can help to set up all those pieces.
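That last step, exposing something to the internet, is a one-field change on a Service; on a cloud provider such as Amazon, this provisions an external load balancer automatically. Again a sketch with placeholder names:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-frontend
    spec:
      type: LoadBalancer       # ask the cloud provider for an external load balancer
      selector:
        app: web-frontend
      ports:
      - port: 80
        targetPort: 8080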

It turns out that Kubernetes doesn't have the idea of an application. There is actually no object inside of Kubernetes called "application." There is this idea of running services and exposing services, and if you bring a bunch of services together, that ends up being an application. But in the modern world, you actually have services that can play double duty across applications. One of the things that I think is exciting about Kubernetes is that it can grow with you as you move from a single application to something that really becomes a service mesh, as your application and your company grow.

Imagine that you have some sort of app, and then you have your customer service portal for your internal employees. You can have both of those frontend applications running on a Kubernetes cluster, talking to a common backend with a hidden API that you don't expose to customers but that is exposed to both of those frontends, and then that API may talk to a database. Then, as you understand your problems, you can actually spawn off different microservices that can be managed separately by different teams. Kubernetes becomes a platform where you can start with something relatively simple and grow with it, and have it stretch from a single application, to a microservice-based application, to a larger cluster that can stretch across multiple teams. There's a bunch of facilities for folks not stepping on each other's toes as they do this stuff.
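One of those facilities is the namespace, which carves a shared cluster into team-sized slices. The team and file names below are illustrative:

    # Give each team its own slice of the cluster.
    kubectl create namespace storefront
    kubectl create namespace customer-portal
    # Each team deploys into its own namespace without colliding.
    kubectl apply -f app.yaml --namespace=storefront
    # Cross-team services are still reachable by qualified name, e.g.
    #   backend-api.storefront.svc.cluster.local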

Just to be clear, this is what Kubernetes is at its base. I think one of the powerful things is that there's a whole host of folks building more platform-as-a-service-like abstractions on top of Kubernetes. I'm not going to say it's a trivial thing, but it's a relatively straightforward thing to build a Heroku-like experience on top of Kubernetes. The great thing is that if you find that that Heroku experience, some of the opinions that were made as part of it, doesn't work for you, you can drop down to a level that's more useful than going all the way down to raw VMs. Because right now, if you're running on Heroku and something doesn't work for you, it's like, "Here's a raw VM. Good luck with that." There's a huge cliff when you actually want to start coloring outside the lines, to mix my metaphors, with these platform services.

ELRICK: What services are out there that you can use that implement Kubernetes?

JOE: That's a great question. There are a whole host of them. One of the folks in the community has pulled together a spreadsheet of all the different ways to install and run Kubernetes, and I think there were something like 60 entries on it. It's an open source system. It's incredibly adaptable in terms of running in all sorts of different places with all sorts of different mechanisms, and there are really active startups that are helping folks run that stuff.

In terms of the easiest turnkey things, I would probably start with Google Container Engine, which is honestly one click. It fits within a free tier. It can get you up and running so that you can play with Kubernetes super easily. There's this thing called Minikube that lets you run it on your laptop as a development environment. That's a great way to kick the tires. If you're on Amazon, my company Heptio has a quick start that we did with some of the Amazon community folks. It's a CloudFormation template that launches a Kubernetes stack that you can get up and running with and really understand what's happening.
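For the laptop route, the kick-the-tires flow looks roughly like this; exact commands and flags vary between tool versions, and the image is just an example:

    # Start a single-node cluster locally.
    minikube start
    # Run something on it.
    kubectl create deployment hello --image=nginx
    # Expose it on a node port and open it in a browser.
    kubectl expose deployment hello --type=NodePort --port=80
    minikube service hello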

I think as users, understand what value it brings at the user level then they'll figure out whether they want to invest in terms of figuring out what the best place to run and the best way to run it for them is. I think my advice to folks would be find some way to start getting familiar with it and then decide if you have to go deep in terms of how to be a cluster operator and how to run the thing.

ELRICK: Yup. That was going to be my next question. You just brought up your company, Heptio. What was the reason for starting that startup?

JOE: Heptio was founded by Craig McLuckie, one of the other Kubernetes founders, and me. We started about six or seven months ago now. The goal here is to bring Kubernetes to enterprises. Companies like Google and Twitter and Facebook have a certain way of thinking about building and deploying software. How do we bring those ideas into the more mainstream enterprise? How do we bridge that gap, using Kubernetes as the tool to do it?

We're doing a bunch of things to make that happen. The first is that we're offering training, support and services, so right now, if companies want to get started today, they can engage with us and we can help them understand what makes sense there. Over time, we want to make that more self-service and easier to do, so that you don't have to hire someone like us to get started and be successful. We want to invest in the community in terms of making Kubernetes easier to approach, easier to run, and more applicable to a more diverse set of audiences.

This conversation that we're having here, I'm hoping that at some point we won't have to have it, because Kubernetes will be easy enough and self-describing enough that folks won't feel like they have to dig deep to get started. Then the last thing we're going to be doing is offering commercial services and software that really helps stitch Kubernetes into the fabric of how large companies work. I think there's a set of tools that you need as you move from being a startup or a small team to actually dealing with the structure of a large enterprise, and that's really where we're going to be looking to create and sell product.

ELRICK: Gotcha.

CHARLES: How does Kubernetes compare and contrast with other technologies that we hear about when we talk about integrating with the enterprise and enterprise clients managing their own infrastructure, things like Cloud Foundry, for example? For someone who's kind of ignorant of both, how do you discriminate between the two?

JOE: Cloud Foundry is more of a traditional platform as a service. There's a lot to like there, and there are some places where the Kubernetes community and the Cloud Foundry community are starting to cooperate. There is a common way of provisioning and creating external services, so you can say, "I want a MySQL database." We're trying to make that idea of "give me a MySQL database, I don't care who's running it or where" common across Cloud Foundry and Kubernetes, so there is some effort going on there.

But Cloud Foundry is more of a traditional platform as a service. It's opinionated in terms of the right way to create, launch, roll out, and hook services together, whereas Kubernetes is more of a building-block type of thing. Raw Kubernetes is, in some ways, a lower-level building-block technology than something like Cloud Foundry. The most comparable competitor to Cloud Foundry in this world, I would say, would be OpenShift from Red Hat.

OpenShift is a set of extensions built on top of Kubernetes. Right now it's a little bit of a modified version of Kubernetes, but over time that team is working to make it a set of pure extensions on top of Kubernetes that add a platform-as-a-service layer on top of the container cluster layer. The experience for OpenShift will be comparable to the experience for Cloud Foundry. There are other folks, too: Microsoft just bought a small company called Deis. They offer a thing called Workflow, which gives you a little bit of the flavor of a platform as a service also. There are multiple flavors of platforms built on top of Kubernetes that would be a more apples-to-apples comparison with something like Cloud Foundry.

Now, the interesting thing with Deis' Workflow or OpenShift or some of the other platforms built on top of Kubernetes is that, again, if you find yourself in a place where that platform doesn't work for you for some reason, you don't have to throw out everything. You can actually start picking and choosing which primitives you want to drop down to in the Kubernetes world without having to go down to raw VMs, whereas Cloud Foundry really doesn't have a widely supported, more raw interface for running containers and services. It's kind of subtle.

CHARLES: Yeah, it's kind of subtle. This is an analogy that just popped into my head while I was listening to you and I don't know if this is way off base. But when you were describing having... What was the word you used? You said a container clast --? It was a container clustered...

JOE: Container orchestrator, container cluster. These are all --

CHARLES: Right, and kind of hearkening back to the beginning of our conversation where you were talking about being able to specify, "I want 10 of these processes," or an elastic number of these processes, that reminded me of the Erlang VM, and how baked into that thing is the concept of these lightweight processes: being able to manage communication between them, supervise them, have layers of supervisors supervising other supervisors, declare a configuration for a set of processes that should always be running, and then also propagate failure of those processes and escalate and stuff like that. Would you say that there is an analogy there? I know they're completely separate beasts, but is there a co-evolution there?

JOE: I've never used Erlang in anger, so it's hard for me to speak super knowledgeably about it. From what I understand, I think there is a lot in common there. I think Erlang was originally built by Ericsson for telecom switches, where you have these strong availability guarantees. Any time you're aiming for high availability, you need to decouple things with outside control loops and ways to coordinate across pieces of hardware and software, so that when things fail, you can isolate that, have a blast radius for a failure, and then have higher-level mechanisms that can help recover. That's very much what happens with something like Kubernetes and a container orchestrator. I think there's a ton of parallels there.

CHARLES: I'm just trying to grasp at analogies of things that might be --

ELRICK: I think they call that OTP, the Open Telecom Platform, in Erlang.

CHARLES: Yeah, but it just got a lot of these things --

ELRICK: Very similar.

CHARLES: Yeah, it seems very similar.

ELRICK: Interestingly enough, for someone that's starting from the bottom, someone uninitiated to Kubernetes, containers, Docker images, Docker, where would they start to ramp themselves up? I know you mentioned that you are writing a book --?

JOE: Yes.

ELRICK: -- 'Kubernetes: Up and Running'. Would that be a good place to start when it comes out, or is there another place they should start before they get there? What are your thoughts on that?

JOE: Definitely check out the book. This is a book that I'm writing with Kelsey Hightower, who's one of the developer evangelists for Google. He is the most dynamic speaker I've ever seen, so if you ever have a chance to see him live, it's pretty great. Kelsey started this, but he's a busy guy, so he brought in Brendan Burns, one of the other Kubernetes co-founders, and me to help finish the book off, and it should be coming out soon. It's Kubernetes: Up and Running. Definitely check that out.

There's also a bunch of good tutorials out there that introduce you to a lot of the concepts in Kubernetes. I'm not going to go through all of those concepts right now. There's probably half a dozen different concepts and terminology things that you have to learn to really get going with it, and I think that's a problem right now. There's a lot to import before you can get started. I gave a talk at the Kubernetes conference in Berlin a month or two ago, and it was essentially like: yeah, we've got our work cut out for us to actually make this stuff applicable to a wider audience.

But if you want to see the power, one of the things you can look at is this system built on top of Kubernetes called Helm, H-E-L-M, like a ship's helm, because we love our nautical analogies here. Helm is a package manager for Kubernetes. Just like you can log in to, say, an Ubuntu machine and do apt-get install mysql and you have a database up and running, with Helm you can say, install WordPress on my Kubernetes cluster, and it'll just make that happen.
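In concrete terms, that WordPress example looks roughly like this; the release name is illustrative, and exact syntax differs between Helm versions (this is the Helm 2 style that was current when this episode aired):

    # Refresh the list of available charts.
    helm repo update
    # Install the WordPress chart; Helm creates the deployments,
    # services and volumes the chart describes.
    helm install stable/wordpress --name my-wordpress
    # Watch the pieces come up.
    kubectl get pods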

It takes this idea of package management, of describing applications, up to the next level. When you're doing regular sysadmin stuff, you can go through and edit the [Inaudible] files or the [Inaudible] files, copy stuff out, and use Puppet and Chef to orchestrate all of that. Or you can take the stuff that the package maintainers for the operating system have done and just say, "Get that installed." We want to offer a similar experience at the cluster level. I think that's a great way to start seeing the power: after you understand all these concepts, here is how easy you can make it to bring up and run these distributed systems that are real applications.

The Weaveworks folks, they're a company that does container networking and introspection stuff, based out of London. They have this example application called Sock Shop. It's like the pet shop example, but distributed and built to show off how you can build an application on top of Kubernetes that pulls a lot of moving pieces together. Then there are some other applications out there like that that give you a little bit of an idea of what things look like as you start using this stuff to its fullest extent.

I would say start with something that feels concrete, where you can start poking around and seeing how things work before you commit. I know some people are depth-first learners and some are breadth-first learners. If you're depth-first, go read the book and the Kubernetes documentation site. If you're breadth-first, just start with an application and go from there.

ELRICK: Okay.

CHARLES: I think I definitely fall into that breadth first. I want to build something with it first before trying to manage my own cluster.

ELRICK: Yeah. True. I think I watched your talk, and I did watch one of Kelsey's talks on container management. There was stuff about replicators and schedulers, and I was like, "The ocean just keeps getting deeper and deeper," as I listened to his talk.

JOE: Actually, I think this is one of the cultural gaps to bridge between frontend and backend thinking. I think a lot of backend folks end up being these depth-first types, where when they want to use a technology, they want to read all the source code before they first apply it. I'm sure everybody has met that type of developer. Then I think there are folks that are breadth-first, where they really just want to understand enough to be effective. They want to get something up and running; if they hit a problem, they'll go ahead and fix that problem, but other than that, they're very goal-oriented: I want to get this thing running. Kubernetes right now is kind of built by systems engineers for systems engineers, and it shows, so we have our work cut out for us to bridge that gap. It's going to be an ongoing thing.

ELRICK: Yeah, I'm like a depth first but I have to keep myself in check because I have to get work done as a developer.

[Laughter]

JOE: That sounds about right, yeah. Yeah, so you're held accountable for writing code.

CHARLES: Yeah. That's where real learning happens when you're depth first but you've got deadlines.

ELRICK: Yes.

CHARLES: I think that's a very effective combination. Before we go, I wanted to switch topics away from Kubernetes for just a little bit, because you mentioned something when we were emailing: that, I guess in a different lifetime, you were actually on the original IE team, at the very beginning of the Internet Explorer team at Microsoft?

JOE: Yes, that's where I started my career. Back in '97, I'd done a couple of internships at Microsoft and then went to join full time, moved up here to Seattle. I had a choice between joining the NT kernel team or the Internet Explorer team. This was after IE3, before IE4. I didn't know if this whole internet thing was going to pan out, but it looked like there was a lot of interesting stuff there. You've got to understand, the internet wasn't an assumed thing back then, right?

ELRICK: Yeah, that's true.

JOE: I don't know, this internet thing.

CHARLES: I know. I was there, and I know that old-school IE sometimes gets a bad rap. It does get a bad rap for being a little bit of an albatross, but if you were there for the early days of IE, it really was the thing that blew it wide open. People do not give it credit: it was extraordinarily ahead of its time. That was the [Inaudible] team that coined DHTML, back when it was called DHTML. I remember actually using it for the first time, around '97, when I was writing raw HTML for everything; CSS was hardly even a thing. I realized that all these static things we were rendering were no longer etched in stone. The idea that every one of these properties I already knew was now dynamic, and changes were reflected moment to moment, was eye-opening. It was mind-blowing, and it was kind of the beginning of the next 20 years. I want to just talk a little bit about that: where did those ideas come from, and what was the impetus for them?

JOE: Oh, man. There's so much history here. First of all, thank you for calling that out. I think we did a lot of really interesting, groundbreaking work then. I think the sin was not in IE6 as it was, but in [inaudible]. I think the fact that --

CHARLES: IE6 was actually an amazing browser. Absolutely an amazing browser.

JOE: And then the world moved past it, right? It didn't catch up. That was the problem. For its time, when it was released, I was proud of that release. But four years on, things get a little long in the tooth. IE3 was based on a rendering engine that was very static, very similar to Netscape at the time. The thing to keep in mind is that Netscape at that time would download a webpage, parse it and display it. There was no idea of a DOM in Netscape at that point, so it would throw away a lot of the information and only store stuff that was very specific to the display context.

Literally, when you resized the window in Netscape back then, it would reparse the original HTML to regenerate things. It wasn't even able to resize the window without going through and reparsing. What we did with IE4 (and I joined close to the tail end of IE4, so I can't claim too much credit here) was bring some of the ideas from something like Visual Basic and merge those into the browser, where you actually have this programming model, which became the DOM, of where your controls are, how they fit together, and being able to live-modify these things. This was all part and parcel of how people built Windows applications.

It turns out that IE4 was the combination of the old IE3 rendering engine, stealing stuff from there, plus this project that was built as a bunch of ActiveX controls for Office called [inaudible]. As you smash that stuff together and turn it into a browser rendering engine, that rendering engine ended up being called Trident. That's the thing that got a nautical theme; I don't think it's connected. That's what I joined and started working on at the time.

This whole idea that you actually have this DOM, that you can modify a programmable representation of DHTML and have it be live-updated on screen, that only arrived with IE4. I don't think anybody had done it at that point. The competing scheme from Netscape was this thing called layers, where it was essentially multiple HTML documents rendered on top of each other, and you could replace one of them. It was awful, and it was lost to the mists of time.

CHARLES: I remember marketing material about layers and hearing how layers were going to be this wonderful thing, but I don't ever remember... did they even ship it?

JOE: I don't know if they did or not. The thing that you've got to understand is that anybody who spent any significant amount of time at Microsoft really internalized the idea of a platform like no place else. Microsoft lives and breathes platforms. I think sometimes it does them a disservice. I've been out of Microsoft for 13 years now, so maybe some of my knowledge is a little outdated here, but I still have friends over there.

But Microsoft is like the poor schmuck that goes to Vegas, pulls the slot machine and wins the jackpot on the first pull. I'm not saying there wasn't a lot of hard work behind Windows, but they hit the goldmine with that from a platform point of view, and then they essentially did it again with Office. You have these two incredibly powerful platforms that ended up being an enormous growth engine for the company over time, and that fundamentally shaped the worldview of Microsoft: they really viewed everything as a platform.

I think there were some forward-thinking people at Netscape and other companies, but I think Microsoft early on really understood what it meant to be a platform, and we saw back then what the web could be. One of the original IE team members, and I'm going to give a shout-out to him, is Chris Wilson, who's now on the Chrome team, I think. I don't know where he is these days. Chris was on the original IE team and is still heavily involved in web standards. None of this stuff is a surprise to us.

After we finished IE6, a lot of the IE team rolled off to do Avalon, which became Windows Presentation Foundation, which was really looking to reinvent Windows UI, importing a bunch of the ideas from the web and modern programming. That's where we came up with XAML, which eventually begat Silverlight, for good or ill. But if you go back in time and look at some of our original demos for Avalon, that was probably... I don't know, 2000 or something like that. They're exactly the type of stuff that people are building with the web platform today.

Back then, it would have been Flex and things like that doing this. We're reinventing this stuff over and over again. I like where it's going. I think we're in a good spot right now, but we see things like the Shadow DOM come up, and I look at that and I'm like, "We had HTC controls in IE early on, which did a lot of Shadow DOM-like stuff." These things get reinvented and refined over time, and I think it's great, but it's fascinating to be in the industry long enough that you can see these patterns repeat.

CHARLES: It is actually interesting. I remember doing UI in C++ and in Java. We did a lot of Java, and for a long time I felt like I was wandering in the wilderness of the web, where I was like, "Oh, man, I just wish we had these capabilities, the things we could do in Swing 10 or 15 years ago." But the happy ending is that I really do feel we are in a place now, finally, where the developer experience truly is competitive with the way it was all those years ago. It's also a testament to just how compelling the deployment model of the web is, that people were willing to forgo all of that so they could distribute their applications really easily.

JOE: Never underestimate the power of view source.

CHARLES: Yeah.

[Laughter]

ELRICK: I think that's why these sorts of conversations are very powerful: going back in time and looking at the development up until now. Like they say, people that don't know their history are doomed to repeat it. I think this is a beautiful conversation.

JOE: Yeah. I've done that developer-focused frontend type of stuff, and I've done the backend stuff. One of the things that I've noticed is that you see patterns repeat over and over again. I sat down to learn React the other day; let's be honest, it probably took more like a week, though I was going to say a weekend. The way that it encapsulates state up and down, the model-view stuff: these things have different twists on them in different places, but you see the same patterns repeat again and again.

I look at the way that we do scheduling in Kubernetes. Scheduling is this idea that you have a bunch of workloads that require a certain amount of CPU and RAM, and you want to play this Tetris game of fitting those things in. You look at scheduling like that, and there are echoes of how layout happens in a browser. There's a deeper game going on here, and as you go through your career, if you're like me and always interested in trying new things, you never leave it all behind. You always see things that influence your thinking moving forward.

CHARLES: Absolutely. I kind of did the opposite. I started out on the backend and then moved over to the frontend, but there's never been a concept I was familiar with from working on server-side code that did not come to my aid at some point working on the frontend. I can appreciate that fully.

ELRICK: Yup. I can agree with the same thing. I jump all around the board, learning things that I have no use currently but somehow, they come back to help me.

CHARLES: That will come back to help you. You thread them together at some point.

ELRICK: Yup.

CHARLES: As they said in one of my favorite video games in high school, Mortal Kombat: there is no knowledge that is not power.

JOE: I was all Street Fighter.

CHARLES: Really?

[Laughter]

JOE: I cut class in high school and went to play Street Fighter at the mall.

CHARLES: There is no knowledge that isn't power except for... I'm not sure about the knowledge of all these little mashy button combinations. Really, I don't think there's much power in that.

JOE: Well, the Konami code still shows up all the time, right?

[Laughter]

CHARLES: I'm surprised how that's been passed down from generation to generation.

JOE: You still see it show up in places you wouldn't expect. One of the sad things is that early on in IE, we had all these Internet Explorer Easter eggs, where if you typed the right combination into the address bar, did this thing, clicked, turned around three times and faced west, you actually got this cool DHTML thing. Those things have largely disappeared. People don't make Easter eggs like they used to. I think there are probably legal reasons for making sure that every feature is as specced. But I kind of miss those old Easter eggs that we used to find.

CHARLES: Yeah, me too. I guess everybody saves their Easter eggs for April 1st but --

JOE: For the release notes, [inaudible].

CHARLES: All right. Well, thank you so much for coming by, Joe. I know I'm personally excited: I'm going to go find one of those Kubernetes-as-a-service offerings that you mentioned and try to do a little breadth-first learning. But whether you're depth-first or breadth-first, I say go to it, and thank you so much for coming on the show.

JOE: Well, thank you so much for having me on. It's been great.

CHARLES: Before we go, there is actually one other special item that I wanted to mention. This is Open Source Bridge, a conference being held in Portland, Oregon on the 20th to 23rd of June this year. The tracks are activism, culture, hacks, practice and theory, and podcast listeners can get $50 off a ticket by entering the code PODCAST on the Eventbrite page, which we will link to in the show notes. Thank you, Elrick. Thank you, Joe. Thank you, everybody, and we will see you next week.
