Transparent Development

Hosted by Charles Lowell and Taras Mankovski

September 26th, 2019.

In this episode, Charles and Taras discuss "transparent development" and why it's not only beneficial to development teams, but to their clients as well.


Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level, which includes tooling, internationalization, state management, routing, upgrades, and the data layer.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

CHARLES: Hello and welcome to The Frontside Podcast, the place where we talk about user interfaces and everything that you need to know to build them right.

It's been a long summer and we're back. I'm actually back living in Austin, Texas again. I think there wasn't too much margin in terms of time to record anything or do much else besides kind of hang on for survival. We've been really, really busy over the last couple of months, especially on the professional side. Frontside has been doing some pretty extraordinary things, some pretty interesting things. So, we've got a lot to chew on, a lot to talk about.

TARAS: There's so much stuff to talk about and it's hard to know where to start.

CHARLES: So, we'll be a little bit rambly, a little bit focused. We'll call it fambly. But I think one of the key points that has crystallized in our minds over this summer is something that binds the way that we work together. Every once in a while, you do some work, you do some work, you do some work, and then all of a sudden you realize that there's something thematic to that work. It bubbles up to the surface and you kind of organically perceive an abstraction over the way that you work. I think we've hit one of those points, because one of the things that's very important for us -- and if you know us, these are things that we talk about, things that we work on -- is that we will go into a project and set up the deployment on the very first day. Make sure that there is an entire pipeline, making sure that there is a test suite, making sure that there are preview applications. And this is the mode that we've been working in for years and years and years. Where you say: if what it takes is spending the first month of a project setting up your entire delivery and showcasing pipeline, then that's the most important work -- inverting the order and saying that it has to come before any code. And I don't know that we've ever had a kind of unifying theme for all of those practices. I mean, we've talked about it in terms of saving money, in terms of ensuring quality, in terms of making sure that something is good for five or 10 years -- like, this is the way to do it. And I think those are definitely the outcomes that we're looking for. But I think we've now identified what the actual mode is for all of that. Is that fair to say?

TARAS: Yeah. One of the things I've thought about for a long time is the context within which decisions are made, because it's not always easy. It's sometimes really difficult to give it a name -- to get to a point where you have a really clear understanding of what it is that's guiding all of your actions. What is it that's making you do things? Like, why do we put a month of work in before we even start doing any work? Why do we put this in our contract? Why do we have a conversation with every client and say, "Look, before we start doing anything, we're going to put CI in place"? Why are we blocking our business on doing this piece? From a business perspective, it's a little bit crazy: "Oh, so you're willing to lose a client because the client doesn't want you to set up a CI process?" Or in the case of many clients, the client is going to say, "We want to use Jenkins." And what we've done in the past, in almost every engagement, is say, "Actually, no. We're not going to use Jenkins because we know that it's going to take so long for you to put Jenkins in place. By the time that we finish the project, you're probably still not going to have it in place. That means that we're not going to be able to rely on our CI process and we're not going to be able to rely on testing until you're finished. We're not going to have any of those things while we're doing development." But why are we doing all this stuff? It was actually not really apparent until very recently, because we didn't really have a name to describe what it is about this tooling and all of these things that makes it so important to us. I think that's what has crystallized. And the way that I know it's crystallized is that now that we're talking to our clients about it, our clients are picking up the language. We don't have to convince people that this is valuable. It just comes out of their mouths -- it actually comes out of their mouths as a solution to completely unrelated problems. They recognize how this particular thing is a solution in that particular circumstance as well, even though it's not something Frontside sold in that particular situation. Do you want to announce what it actually is?

CHARLES: Sure. Drum roll please [makes drum roll sound]. Not to get too hokey, but it's something that we're calling Transparent Development. What it means is having radical transparency throughout the entire development process, from the planning to the design, to the actual coding and to the releases -- everything about your process. The measure by which you can evaluate it is: how transparent is this process, not just to developers but to other stakeholders -- designers, people who are very developer adjacent, engineering managers, all the way up to C-level executives? How transparent is your development? As we talk about how we arrived at this concept, we can see how it's a mode of thinking that guides you towards each one of these individual practices. It guides you towards continuous integration. It guides you towards testing. It guides you towards continuous deployment. It guides you towards continuous release and preview. But I think the most important thing is that by capturing this concept, it's guided us to adopt new practices which we did not have before. That's where the proof is in the pudding: if you can take an idea and it shows you things that you hadn't previously thought of.

I think there's a fantastic example. I actually saw it at Clojure/conj in 2016; there was a talk on juggling. And one of the things they talked about was that up until, I think it was the early 80's or maybe the early 60's, the state of juggling was: you knew a bunch of tricks, you practiced the tricks, and you worked up to these hard tricks. That was what juggling was. It was very satisfying, and it had been like that for several millennia. But these guys in the Physics department were juggling enthusiasts, and I don't know how the conversation came about -- you'd have to watch the talk. It's really a great talk. But what they did is make a writing system, a nomenclature system for systematizing juggling tricks, so they could describe any juggling trick with this abstract notation. And the surprising outcome, or not so surprising outcome, is that once you have it in the notation, you can manipulate the notation to reveal new tricks that nobody had thought of before. You're like, "Ah, by capturing the timing and the height and the hand, we can actually understand the fundamental concept, and so now we can recombine it in ways that weren't seen before." That opened up, I think, an order of magnitude of new tricks that people just had not conceived of, because they did not know that they existed.

And so, I think that, as an abstract concept, is a great yardstick by which to measure any idea. Yes, this idea very neatly explains the phenomena with which I'm already familiar -- but does the idea guide me towards things whose existence I have no concept of? Because the idea predicts their existence, I know they must be there and I know where to look for them. And aha, there it is. It's like shining a light. And so I think that's the proof in the pudding. That's a little bit of a tangent, but I think that's why we're so excited about this. And I think it's why we think it's a good idea.

TARAS: Yes. What's also been interesting for me is how universal it is, because the question "is this transparent enough?" can be asked in many contexts. What's been interesting for me is that asking that question in contexts where I didn't expect it actually yielded better outcomes. At the end of the day, I think the test for any idea is: does it help you more frequently than not? Is it actually leading you? Does applying this pattern increase the chances of success? And that's one of the things that we've seen, thinking about the practices that we're putting into place and asking: are they transparent enough? Is this transparent enough? It's actually been really effective. Do you want to talk about some of the things that we've put in place in regards to transparency? Like, what it actually looks like?

CHARLES: Yeah. I think this originally started when we were setting up a CI pipeline for a native application, which is not something that we've typically done in the past. Over the last, I would say, 10 years, most of our work has been on the web. And so, when we were asked to essentially take responsibility for how a native application was going to be delivered, some of the first things that we asked, kind of out of habit and out of just the way that we operate, were: how are we going to deliver this? How are we going to test it? How are we going to integrate it? All the things that we've just talked about are things that we do naturally. But continuous integration and automated builds are very prevalent on the web -- I think testing still has a lot of progress to make on the web, but it's far more prevalent there than in other communities, certainly the native community. So when we started spending a month setting up continuous integration, an integration test suite, spending time working on simulators so that we could simulate Bluetooth, having an automated process with which we could ship to the App Store -- all of these things kind of existed as one-offs in the native development community. There are a lot of native developers who do these things. But because it's not as prevalent, and because it was new to us, it caused a lot of self-reflection about why it is that we feel compelled to do this. And we also had to express this; we had to justify this work to existing native development teams and the stakeholders who were responsible for the outcomes of those teams. So there was this period of self-reflection: we had to write down and be transparent about why we were doing this.

TARAS: Yeah. We had to describe that in SOWs. We actually had really long write-ups about what it is that we're setting up. And for a while, I think people would read these SOWs and get the "what" of what we were going to put into place. But it wasn't until we actually put it into place that we saw a few really big wins with the setup. One of the first ones was setting up preview apps. Preview apps on the web are pretty straightforward because we've got Netlify, which just kind of gives it to you easily.

CHARLES: Netlify and Heroku. It's very common.

TARAS: Yeah, you activate it, it's there. But on the mobile side, it's quite a different story, because you can't just spin up a mobile device that is available through the web. It's something kind of very special. And so we did find a service called Appetize that does this. And so we hooked up the CI pipeline to show preview apps in pull requests. So for every pull request, you could see specifically what that pull request introduced without having to pull down the source code and compile it. You could just click a link and you'd see a video stream of a mobile device and that application running on it. So the setup took a little bit of time. But once we actually put it in place and showed it to our clients, one of the things that we noticed is that it became a topic of conversation. Like, "Oh, preview apps are amazing." "This is so cool." "Preview apps are really great." And I think in some ways, it actually surprised us, because we knew that they were great, but I think it was one of the first times that we encountered a situation where we would show something to a client and they just loved it. And it wasn't an app feature. It was a CI feature. It was part of a development process.
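[Editor's note: as a rough sketch of the kind of glue being described here -- a CI step that uploads a freshly built app to a device-streaming service like Appetize and posts the preview link back on the pull request. Appetize does have an upload API, but the exact endpoint shape, response field, and environment variable names below are assumptions for illustration, not the setup from the episode.]

```ts
// Runs in CI after the app binary is built. Assumes Node 18+ (global fetch,
// FormData, Blob) and APPETIZE_TOKEN / GITHUB_TOKEN provided by the CI secrets.
import { readFile } from "node:fs/promises";

async function publishPreview(buildPath: string, prNumber: number) {
  // Upload the build; the service answers with a shareable streaming URL.
  const form = new FormData();
  form.append("file", new Blob([await readFile(buildPath)]), "app.zip");
  form.append("platform", "ios");
  const upload = await fetch("https://api.appetize.io/v1/apps", { // assumed endpoint
    method: "POST",
    headers: { Authorization: `Basic ${btoa(`${process.env.APPETIZE_TOKEN}:`)}` },
    body: form,
  });
  const { publicURL } = (await upload.json()) as { publicURL: string }; // assumed field

  // Comment the link on the pull request so any stakeholder can open it.
  await fetch(
    `https://api.github.com/repos/${process.env.GITHUB_REPOSITORY}/issues/${prNumber}/comments`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ body: `Preview this build: ${publicURL}` }),
    },
  );
}

publishPreview("build/app.zip", Number(process.env.PR_NUMBER));
```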

CHARLES: Right. So, the question then is: why was this so revelatory? Why was it so inspiring to them? And I think the reason is that even if we have an agile process and we're on two week iterations, one week iterations, whatever, there's still a macroscopic waterfall process going on, because essentially your business people, your design people, maybe some of your engineering people are involved at the very front of the conversation. And there's a lot of talking and everybody's on the same page. And then we start introducing coding cycles. And like I said, even if we're working on two week iterations and we're "agile", the only feedback that you actually have about whether something is working is the coder saying it's done. "I'm done with this feature. I'm on to the next feature for the next two weeks." And after that two weeks, it's, "I'm done with this feature. I'm on to the next feature." From the initial design, the non-technical stakeholders have an expectation about what's going on. They have this expectation, and they hope that through these agile, iterative development cycles, they will get the outcome that satisfies that hope. But they're not able to actually perceive it and put their hands on it. It's only the engineers, and maybe some really tech-savvy engineering managers, who can actually perceive it. And so they're getting information secondhand. "Hey, we've got authentication working and we can see this screen and that screen." And, "Hey, it works on iOS now. I have some fix-ups that I need to do on Android." So maybe they're consuming all of their information through standups or something like that, which is better than nothing. That is a level of transparency. But then you get to actually releasing the app -- whether it's on the web or on native, though this is really a problem on native -- and everybody gets to perceive the app as it actually is. So you have this expectation and this hope that was set maybe months prior, and it comes absolutely careening into reality in an instant, the very first moment you open the app when it's been released. And if it doesn't meet that expectation, that's when you get disappointment. When expectations are out of sync with reality -- grossly out of sync, or even a little bit out of sync -- you get disappointment. That's a fundamental explanation of the phenomenon of disappointment in general, but it's also an explanation of why disappointment happens so often on development projects. The expectations and hopes of what a system can be live in the minds of the stakeholders as a kind of probability cloud, and it collapses to a single point in an instant.

TARAS: And that's when things really hit the proverbial fan. Now you have the opposite: everything that was not transparent about your development process, everything that was hidden in its opaqueness, all of those problems surface. On the product side, maybe something didn't quite get implemented the way it was supposed to. You actually find out two or three weeks before you're supposed to release that a feature wasn't quite implemented right. It went through testing, but it was tested against Jira stories that were maybe not written correctly. So the product people are going, "What the hell is this? This is not what I signed up for. This is not what I was asking for." So, there's that part.

And then on the development side, you've got all of the little problems that you didn't account for because you haven't been shipping to production from day one. You have an application that's not really quite working right. Say you were supposed to integrate with some system that uses CORS or something you didn't account for -- a third-party dependency you didn't fully understand. It wasn't until you actually turned it on that you started to talk to the thing properly, and you realize there's some mismatch that is not quite working. So now everything that was not transparent about the development process, everything that was hiding in its opaque corners, is your problem for the next three weeks, because you've got to fix all of these problems before you release. And that's the position a lot of organizations find themselves in: they've been operating for six months and, "Everything is going great!" And then three months or three weeks before release, it's, "Actually, this is not really what we were supposed to do. Why did this happen?" That time is really tough.

CHARLES: Yeah. That's what we call crunch time. And a lot of times we think of it as inevitable, but in fact it's an artifact of an opaque process.

TARAS: Yeah.

CHARLES: That's the time when everybody's like, "We're ordering pizza and Dr. Pepper and nobody's leaving for a month."

TARAS: Yeah. I think people who practice functional testing or acceptance testing as part of their development process could relate to this. If you've had to set up a test suite on an application that was written without one, the first thing you deal with is all the problems that you didn't even know were there. It's not until you start doing functional testing -- not integration or unit testing, where you're testing everything in isolation, but where you're perceiving the entire system as one big system and testing each piece as the user would -- that you start to notice all the weird problems you have. Like, your views are re-rendering more than you expected. You have things being rendered that you never noticed, because it happens so quickly -- but in a test, it happens at a different pace. There are all these problems that you start to observe the moment you start doing acceptance testing, but you don't see them otherwise. It's the process of making something transparent that actually highlights all these problems. But for the most part, if you don't know that there are transparent options available, you never realize you're having these problems until you're in crunch time.

CHARLES: Right. And what's interesting is that viewed through that lens, your test suite is a tool for perception, there to provide that transparency -- not necessarily something that ensures quality. Ensuring quality is a side effect of being able to perceive bugs as they happen, or perceive integration issues at the soonest possible juncture. It's there to shine the light, so to speak, rather than to act as a filter. It's a subtle distinction, but I think it's an important one.
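[Editor's note: a concrete illustration of the "test suite as a tool for perception" idea -- an acceptance test that drives the app the way a user would and asserts only on what the user can perceive. Playwright is used here purely as an example framework; the route and labels are invented.]

```ts
import { test, expect } from "@playwright/test";

test("a signed-in user sees their dashboard", async ({ page }) => {
  // Interact with the app exactly as a user would: no internal state, no mocks.
  // Assumes a baseURL is configured in playwright.config.
  await page.goto("/login");
  await page.getByLabel("Email").fill("user@example.com");
  await page.getByLabel("Password").fill("s3cret-example");
  await page.getByRole("button", { name: "Sign in" }).click();

  // The assertion is about perceivable reality, not implementation details.
  await expect(page.getByRole("heading", { name: "Dashboard" })).toBeVisible();
});
```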

TARAS: On functional testing and acceptance testing: one of the things I know personally from experience working with comprehensive acceptance test suites is that there's a certainty you get by expressing the behavior of the application in your tests. And what that certainty does is replace hope. As opposed to having hope baked into your system -- and I think many people don't even perceive it as hope. They perceive it as reality. They see it as, "My application works this way." But really, there's a lot of trust built into that, where you have to say, "Yeah, I believe the system should work because I wrote it and I'm good. And it should not be broken." But unless you have a mechanism that actually verifies this and ensures this is the case, you are operating in the realm of dreams and hopes and wishes, and not necessarily reality. And I think that's one of the things that's different about these practices of shining light on the opaque areas of the development process -- and it's actually not even just the development process, it's the business process of running a development organization. Shining light in those areas is what gives you the opportunity to replace hope with real, verifiable truth about your circumstances.

CHARLES: And making it so that anyone can answer that question and discover that truth, discover that reality, for themselves. So: generating the artifacts, putting them out there, and then letting anybody be the primary perceiver of what that artifact means in the context of the business -- not just developers. And that really explains preview apps quite neatly, doesn't it? Here we've done some work. We are proposing a change. What are the artifacts that explain the ramifications of this change? So we run the test suite. That's one of the artifacts that radiates the information so that people can be their own primary source. It's developer-centric, although any old person can tell that if the test suite's failing, it's not a change we should go with. But the preview app is something else: we take this hypothetical change, we build it, we put it out there, and now everyone can perceive it. And so it calibrates the perception of reality and it eliminates hope. Because if your development process is based on hope, you are signing yourself up for disaster. I like what you said, that it implies a huge amount of trust in the development team. And you know what? If you have a crack development team, that trust is earned, and people will continually invest based on that trust. But the fundamentals are still fragile, because a gap can still open up between the expectation and the reality. And the problem is that when that happens, the trust is destroyed. It's during that crunch time, if it does happen, that you lose credibility -- not because you became a worse developer, not because your team is lower performing, but because this divergence was allowed to open. And that lowered trust unfortunately has a negative knock-on effect, and reasonably so. Because if you're an engineering manager or a product manager or something like that, and you're losing trust in your development team and their ability to deliver what you talked about, then you're going to want to micromanage them more. The natural inclination is to be defensive and interventionist, and you might actually introduce a set of practices that inhibit the development cycle even further and lower the team's ability to perform right when they need it most. And then you end up destroying more trust.

TARAS: Yeah, it's a spiraling effect, because it comes out of the process of trying to make things better. You start to introduce practices -- maybe you're going to have meetings every day reviewing outstanding stories to try to get everybody on the same page -- but now you're micromanaging the development team. The development team starts to resent that, and now you've got people hating their jobs. It gets messier and dirtier and more complicated. And the root cause is that from day one, there was a lot of just [inaudible] about getting into it and starting to write some code, but what you didn't do is put in place the fundamentals of making sure that you can all observe a reality that is honest. It's interesting how, when you start to take this idea and think about it in different use cases, it actually tells you a lot about what's going on, and you can use it to design new solutions.

One of the things that Frontside does -- those who've worked with us before might know this or might not -- is that we don't do blended rates anymore. One of the challenges with blended rates is that they hide the nuance that gives you the power to choose how to solve a problem.

CHARLES: Yeah. There's a whole blog post that needs to be written on why blended rates are absolute poison for a consultancy. But this is the principle behind why.

TARAS: Yeah. I think it's poison for a transparent consultancy, because if you want to get the benefits of transparency, you have to be transparent about your people. The alternative is that you start off relying on your company's reputation, and there's a kind of inherent lie in the way the price points are set up, because everybody knows there are going to be a few senior people, a few intermediate people, a few junior people. But the exact ratios, who is doing what, how many people are available -- all of those things are hidden inside the consulting company so that it can manage its resources internally. What that does is simplify your communication with the client. But it also disempowers you from having certain difficult conversations when you need them most. With transparency, you can say, "Look, for this kind of work, we don't need more senior people working on this. We can have someone who is junior, at $100 an hour or $75 an hour, as opposed to $200 or $250 an hour. We can have that person working on this," and you can very clearly define how certain work gets done. It requires more work. But what it does is create a really strong bond of honesty and transparency between you and your clients. Now the client starts to think about you as a resource that allows them to fulfill their obligations in a very actionable way. They can think about how to use you to solve their problems. They don't need a filter that will process that and try to make it work within the organization. You essentially become one unit. And I think that sense of unity is the fundamental piece that keeps consulting companies and clients glued together. It's the sense of, "We can rely on this team to give us exactly what we want when we need it, and sometimes give us what we need that we don't know we need." That bond is there, and that bond is strong, because there is no lie in the relationship. You're very transparent about who the people working on it are, what they're actually going to be doing, and how much it is costing.

CHARLES: It's worth calling out explicitly what the flip side of that is. If you have a blended rate, which is the way that Frontside operated for, gosh, pretty much forever, people will naturally calibrate towards your most senior people. If I'm going to be paying $200 an hour across the board, or $150, or $300, whatever the price point is, I'm going to want to extract the most value for that one price point. And so I'm going to expect the most senior people, because for the same price, five senior people is a better deal than two senior people, two medium-level people, and one junior person. And so it has two terrible effects. One is that they don't appreciate the senior people -- like, "Hey, actually, these are people with extraordinary experience, extraordinary knowledge, extraordinary capability that will kick-start your project." So the senior people are underappreciated, and then they're extremely resentful of the junior people. It's like, "I'm paying the same rate for this very senior person as I am for this junior person? Get this person off my project." But if you say, "You know what, we're going to charge a fifth of the cost for this junior person and we're going to utilize them," then you're providing real value and they appreciate it. They're like, "Oh, thank you for saving me so much money. We've got this task that does not require your most senior person. That would be a misallocation of funds; I'd be wasting money. But if you can charge me less and give me this junior person, and they're going to do just as competent a job but it's going to cost me a fifth of the money, then that's great. Thank you." So it flips the conversation from 'get this god-damn junior person off my project' to 'thank you so much for bringing this person on'. It's so critical. But that's what that transparency can provide. It can totally turn a feeling of resentment into gratitude.

TARAS: What's interesting is that from a business perspective, you make the same amount of money -- in some cases, you actually make more money as a consulting company. But that's not the important part, because the amount of value that's generated from having more granular visibility into what's happening is so much greater. It's kind of like with testing: when you start to shine light on these opaque areas and flush out the gremlins that are hiding there, what you discover is this opportunity to have relationships with clients that are honest. For example, one of the things we've done recently is put in place a 10-tier price point model, which allows us to be really flexible about the kind of people that we introduce. There's a lot of detail that goes into the actual contract negotiation, but it allows us to be very honest about the costs and to work together with our clients to find a solution that's going to work really well for them. And this is just a starting point. When you start thinking about transparency in this kind of diverse way, you start to realize there are additional benefits that you might never have experienced before. One of the initiatives that we launched with one of our clients recently addresses a general problem that exists in large projects: if you have a really big company and you have, let's say, 20 or 30 interconnected services, your data domain -- all the kinds of data you work with -- is spread over a whole bunch of microservices, spread over potentially a bunch of different development teams, spread over a bunch of different locations. What usually happened in the past is that the data domain was siloed into specific applications. We worked with a bank in the past, and that bank operated in 80 countries. In each country they had 12 different industries -- insurance and mortgage and different kinds of service areas they offered. And for each country, for each service, they had a different application that provided that functionality. Then the next step is: let's not do that anymore, because we now have something like 100, 150 apps. Let's bring it all together under a single umbrella and create a single shared domain that we can use. And GraphQL becomes a great solution for that. But the problem is that making that change is crazy complicated, because on one side you have the people on the business side who understand how all the pieces fit together. On another side, you have the developers who know where the data can come from and how to make all that real. And on yet another side, you have the frontend implementers who are actually building the UIs that consume all these services.

On a project that we're working on right now, we're building a federated GraphQL gateway layer that connects all these things, bringing them together. But without very specific tooling to enable that kind of coming together of the frontend, the backend, and the business people -- creating a single point of conversation and a single point of reference for all the different data that we have to work with and all the data that is available to the gateway -- without that transparency in the actual data model, it is really difficult to make progress. You don't have shared terminology; you don't have a shared understanding of the scope of the problem. There are a lot of dots and context that need to be connected. And anyone who has worked with enterprise knows how big these problems get. So on this project, we aimed to bring transparency to the process. We started to build an application that brings together all of the federated services into a visualization that different parties can be involved in. I think one of the common patterns that we see with transparency in general is that we are including people in the development process who were previously not included. In the past, they would be involved either very early on or very late, but not along the way. What this transparency practice does is allow us to democratize and flatten the process of creating foundations for pieces that touch many different parts of the organization. This tool we created allows everyone to be involved in the process of creating the data model that spans the entire organization, to have a single point of reference that everybody can go to, and to have a process for contributing to it. They don't have to be developers. There are developers who consume it, business people who consume it, data modeling people who consume it -- different parties involved. But the end result is that everyone is on the same page about what it is they're creating. And we're seeing the same kind of response as we saw with preview apps: people who previously didn't really have an opinion on development practices or how something gets built are all of a sudden participating in the conversation and making really valuable suggestions -- suggestions developers couldn't have arrived at on their own, because developers often don't have the context necessary to understand why something gets implemented in a particular way.
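[Editor's note: a minimal sketch of one slice of a federated GraphQL setup like the one described -- a single subgraph that exposes part of the shared data model and marks it with a federation key so a gateway can compose it with other teams' schemas. The packages are Apollo's federation libraries; the Customer type, resolver data, and port are invented for illustration.]

```ts
import { ApolloServer } from "@apollo/server";
import { startStandaloneServer } from "@apollo/server/standalone";
import { buildSubgraphSchema } from "@apollo/subgraph";
import gql from "graphql-tag";

// This subgraph owns the Customer slice of the organization-wide data model.
// The @key directive tells the gateway how to join Customer across subgraphs.
const typeDefs = gql`
  type Customer @key(fields: "id") {
    id: ID!
    name: String!
  }

  type Query {
    customer(id: ID!): Customer
  }
`;

const resolvers = {
  Query: {
    customer: (_: unknown, { id }: { id: string }) => ({ id, name: "Ada Lovelace" }),
  },
  Customer: {
    // Lets other subgraphs reference and extend Customer by its key.
    __resolveReference: ({ id }: { id: string }) => ({ id, name: "Ada Lovelace" }),
  },
};

const server = new ApolloServer({
  schema: buildSubgraphSchema({ typeDefs, resolvers }),
});

startStandaloneServer(server, { listen: { port: 4001 } });
```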

CHARLES: Something beautiful to behold, really. And like I said, it's wonderful when a simple concept reveals things that had lain hidden before.

TARAS: Yeah. It's a very interesting lens to look at things through. How transparent is this, and how can we make it more transparent? Asking and answering that question has been very helpful in understanding our challenges in the work that we do on a daily basis, and also in understanding how we could make it better.

CHARLES: I apply this concept in action on my pull requests. I've really been focusing on trying to make sure that if you look at my pull request, you can pretty much understand what I've done before you even look at the diff. The hallmark of a good pull request is that by reading the conversation, you understand what the implementation is going to be. There aren't really any surprises there. It's actually hard to achieve that. Same thing with git history. I've been spending a lot of time trying to think: how can I provide the most transparent git history? That doesn't necessarily mean an exact log of what happened moment to moment, day to day, but making sure that your history presents a clear story of how the application has evolved. And sometimes that involves a lot of rebasing and merging and branch management.

I think another area that has been new for us -- the things I just described are areas where we're re-evaluating already accepted principles against a new measure -- is introducing an RFC process to a client project where we're making architectural decisions with our developers, the client's developers, and external consultants. You've got a lot of different parties, all of whom need to be on the same page about the architectural decisions that you've made. Why are we doing this this way? Why are we doing modals this way? Why are we using this style system? Why are we using routing in this way? Why are we doing testing like this? These are decisions that are usually made on an ad hoc basis to satisfy an immediate need. It's like, "Hey, we need to do state management. Let's bring in Redux, or let's bring in MobX, or let's bring in whatever." And do you want to hire experts just to help you make that best ad hoc decision? Well, not really. I mean, you want to lean on their experience to make the best decision. But it's about having a way of recording and saying: this is the rationale for a decision that we are about to make to fulfill a need. And then having a record of that, putting it down in the book for anybody who comes later. First of all, when the discussion is happening, everybody can understand the process that's going on in the development team's head. And then afterwards -- and this is particularly important -- when someone asks, "Why is this thing this way?", you can point directly to an RFC. This is something that we picked up from the Ember community. Open source projects, by their very nature, have to operate in a highly transparent manner, so it's no surprise that this process came from the internet and an open source project. But it's been remarkably effective, I would say, in achieving consensus and making sure that people are satisfied with decisions, especially if they come on afterwards, after the decisions have been made.

TARAS: We actually got to experience that particular benefit today. One of the other ways that the RFC process, and transparency around architecture, benefits a development organization is that a lot of times, when you are knee deep in an implementation, that is not the time you want to be having architectural conversations. In the same way that a big football team huddles up before they go on the field -- you can't be talking strategy and architecture and plans while you're on the football field. You have to be ready to play. This is one of the things the RFC process does: it allows us to say, "Look, right now we have a process for managing architecture. With the RFC process, you can go review our accepted RFCs. You can make proposals there." And that is a different process from the one we're involved in on a daily basis, which is writing the application using the architecture we have in place. That in itself can be really helpful, because well-intentioned people bring up these conversations because they really are trying to solve a problem, but it might not be the best time. Having that kind of process in place and being transparent about how architecture decisions are made allows everyone to participate, and it also allows you to prioritize conversations.

CHARLES: Yeah. And that wasn't a practice that we had adopted previous to this, but it's something that seemed obvious that we should be doing. It's like: how can we make our architecture more transparent? Well, let's do this practice. I keep harping on this, but I think it's the hallmark of a good idea if it leads you to new practices and new tools. And we're actually thinking about adopting the RFC process for all of our internal development, for maintaining our open source libraries.

TARAS: There's something else that we've been working on that we're really excited about -- there's a lot of stuff happening at Frontside. One of the things we've been doing is working on something we call the transparent node publishing process, which I think originally drew inspiration from the way NativeScript has their repo set up. One thing that's really cool about how they have things set up is that every pull request is automatically available. Very quickly, a pull request is available for you to play with, and you can put it into your application as a published version on npm and see exactly whether that pull request is going to work for you. You don't have to jump through hoops. You don't have to clone the repo, build it locally, link it -- you don't have to do any of that, because if you see a pull request that has something you want but that's not available in master, there's an instruction on the pull request that tells you, "Here's how you can install this particular version from npm." So essentially, every pull request automatically gets published to npm, and you can just download and install that specific version for that particular pull request in your project. That in itself, I suspect, is one of those things that is going to be talked about. It can alleviate a lot of problems that we have in development processes, because there's a kind of built-in barrier to the availability of the work of the people participating in a project, and we're essentially breaking it down with this transparent node publishing process. That's something we're very close to having on all our repos, and we're going to try it out and then hopefully share it with everyone on the internet.

CHARLES: I didn't know that NativeScript did this. The idea that came out of it is: how can we apply these transparency principles to the way we maintain npm packages? The entire release process should be completely transparent, so that when I make a pull request, it's available immediately in a comment. And then furthermore, when a pull request is merged, there's no separate step of getting someone to publish it. The moment it's on master, it's available as a production release. You close the latency, you close the gap, and people perceive things as they are. There's no, "Oh, that merged. When do I get this?" Here's something I can't stand about using public packages: you have some issue, you find out that someone else has had this issue, they've submitted a pull request for it, and then it's impossible to find out if there's a version -- and which version -- actually contains the fix. And it's even more complex with projects that backport fixes to other versions. I might be on version two of a project. Version three is the most recent one, but I can't upgrade to version three because I might be dependent on some version two APIs. But I really need this fix. Well, has it been backported? I don't know. Maybe upgrading is what I have to do, or maybe downgrading. Or if I'm on the same major release version, maybe there have been 10 pull requests merged but no release to npm. It can be shockingly difficult to find out if something is even publicly available. The transparency principle comes in as: hey, if I see it on GitHub, then there's something there that I can touch and perceive for myself, to see if my issue has been resolved or if things work as I expect.
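[Editor's note: a small sketch of what a per-pull-request publish step can look like. The version scheme, tag naming, and PR_NUMBER environment variable are assumptions for illustration, not NativeScript's or Frontside's exact setup; the npm flags themselves are standard.]

```ts
// Runs in CI on every pull request: stamp a unique, installable version and
// publish it under a dist-tag so it never shadows `latest`.
import { execSync } from "node:child_process";

const pr = process.env.PR_NUMBER; // e.g. "412", assumed to be provided by CI
const sha = execSync("git rev-parse --short HEAD").toString().trim();
const version = `0.0.0-pr${pr}.${sha}`; // prerelease: sorts below any real release

// Stamp the version without creating a git tag, then publish under a dist-tag.
execSync(`npm version ${version} --no-git-tag-version`);
execSync(`npm publish --tag pr-${pr}`);

// The CI job can then post this line as the pull request comment:
console.log(`Install with: npm install <package-name>@${version}`);
```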

TARAS: I'm really excited about this. I'm really excited about this crystallization of transparency. And seeing our clients starting to apply it to solving problems within their organizations as well is very inspiring.

CHARLES: Yeah, it is. It is really exciting. And honestly, I feel like we've got one of those little triangular sticks that people use to find water. I feel like we have a divination stick. And I'm really excited to see what new tools and practices it actually predicts and leads us to.

TARAS: Me too. I'm really excited to hear if anyone likes this idea. Send us a tweet and let us know what you see in it for yourself, because I think it's a really interesting topic and I'm sure there's a lot that people can do with this idea in general.

CHARLES: Thank you for listening. If you or someone you know has something to say about building user interfaces that simply must be heard, please get in touch with us. We can be found on Twitter at @TheFrontside, or over just plain old email at contact@frontside.io. Thanks and see you next time.
