John sits down with Paul Frend and Brian Lonsdorf (aka Dr Boolean) to talk about functional programming concepts like transducers.
In this episode John sits down to talk to Paul Frend and Brian Lonsdorf (aka Dr Boolean) about functional programming, and its practical use cases on the job. Paul has released a new course on egghead.io covering the topic of transducers that is the spark for this conversation.
Transducers are a useful pattern that can deliver performance and readability, but they are often misunderstood or obscure to many programmers. Along with transducers, you'll learn more about monoids, folds, lenses and so much more.
"Functional JavaScript with Paul Frend and Brian Lonsdorf (Dr Boolean)" Transcript
John Lindquist: This is an Egghead.io podcast on 'Transducers,' with Paul and Brian. So welcome, everyone, let's start by introducing ourselves. We've got Brian and Paul on the call, so, you know what? Paul, if you want to go first and introduce yourself, your experience, where you're from, all that?
Paul Frend: Sure. Thanks, John. It's good to be here. So, I'm from Sweden, or Stockholm, Sweden, originally.
John Lindquist: Nice.
Paul Frend: And I'm now living in Brisbane, Australia. I've been here for about ten years. I've been working as a developer slash software engineer, whatever you call it nowadays ...
John Lindquist: Yeah.
Paul Frend: ... for around ten years. Mainly working with single-page applications, sorta started in in-house CMS management, then moved on to 'digital agency land,' where I work mainly in front and back end applications.
Paul Frend: And I'm now working for a startup called Console Connect. I'm building a product that automates network connections off the public internet, between data centers ... layer two, layer three connections. Those types of things, but I mainly work on the front end there. A little bit on the back end, so I mainly work in React and Node. And ...
John Lindquist: Okay. That sounds like a fun problem to solve.
Paul Frend: Yeah, it's very interesting. It's a lot of novel problems that haven't been solved, I guess.
John Lindquist: Yeah, cool.
Paul Frend: In terms of my style, I'm not a hardcore functional programmer. I use both object-oriented and functional programming concepts. I don't try and be very opinionated, just whatever gets the problem done, type of thing.
John Lindquist: Yeah? So you're a mature programmer, then. Is that what ... ?
Paul Frend: Yeah.
John Lindquist: Have you ever seen that graph of how you accept your own programming style over time? Or ...
Paul Frend: Maybe. Yeah, I don't know if I would ever call myself 'mature,' but ...
John Lindquist: Whatever gets it done.
Paul Frend: Yeah, exactly. It's probably just too much pain in my past, of ...
John Lindquist: Right.
Paul Frend: ... trying to stick to a specific type of solution. You end up accepting something that works, that isn't too bad, and moving on.
John Lindquist: Amen. Yeah. All right, Brian, how about you?
Brian Lonsdorf: Hey! So, I'm Brian. I'm trying to think about ... I know I know the answer to this question ... I've been writing programs for ten years or so. Yeah, about like ... first five years was 'OO,' and then the last five have been hardcore 'FP,' I suppose. But, yeah. It's been ... just ... I get really into exploring things and totally agree that there's never one 'right' solution. But I'm still in my 'vast ocean' of, "How do I figure out how to work in a profunctor today?"
John Lindquist: Yeah.
Brian Lonsdorf: So, it's the way I am right now. But anyway, I've been programming for a while. I work at Salesforce now, on tooling and other ... actually a design system, trying to do things like prove properties for all Salesforce designs and whatnot. It's pretty fun.
John Lindquist: Cool. I always enjoy working on tooling, where you're using buttons to make buttons.
Brian Lonsdorf: Yeah.
John Lindquist: You're writing code to make more, yeah.
Brian Lonsdorf: Yeah, it's trippy that way. And also, we're transcending UIs these days, with chat and whatnot, so, it gets interesting.
John Lindquist: Are you doing any voice recognition ... style ...
Brian Lonsdorf: As a matter of fact, yeah, we're trying to do some stuff with that at Salesforce, now. And transducers are a pretty interesting fit for that topic, so ...
John Lindquist: A transducer? Well, what is that? Thanks for the segue!
Brian Lonsdorf: Yeah, you know. So, I guess ... in the ... voice control and chat, a lot of it is state machines. You can kinda model things with state machines really well, and get into substates, and kinda keep going down these ... this graph of transformations. And you have finite state transducers, and I think that's where this kinda comes from, but this podcast is probably more about the JavaScript transduction stuff, where you have a pipeline of functions ... and you can ... it's like a streaming fold.
John Lindquist: Yeah.
Brian Lonsdorf: So, I don't know. I bet you Paul has a much better definition.
John Lindquist: Yeah. Hit us, Paul. Enlighten us.
Paul Frend: No, I like that.
John Lindquist: What is a trans-
Paul Frend: He can elaborate on that ... how he's using that with the state machines at Salesforce, as well, later on. That would be really interesting. But yeah, from my understanding of it ... well, the best description I've seen of a transducer is a composable, performant transform that doesn't create intermediary collections between operations. So, like Brian said, being able to compose transforms together, and then you have the reducer as the underlying operation that makes sure you only [need to rake 00:05:04] through your collection once. And it also decouples you from the collection type that you're working on, so those transforms don't care if you're working on arrays, or objects, or generators, or whatever it is you want to operate on, in terms of the collection.
John Lindquist: Right.
Paul Frend: There ... I'll stick to that.
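To ground Paul's definition, here is a minimal sketch in plain JavaScript. The helper names (mapping, filtering, compose, pushReducer) are illustrative rather than from any particular library: each transform wraps a reducer, so the composed pipeline runs in a single pass with no intermediate collections, and the same transform works against whatever reducer builds your target collection.

```js
// A transducer wraps a reducer and returns a new reducer, so composed
// transforms run in one pass with no intermediate collections.
const mapping = (f) => (reducer) => (acc, x) => reducer(acc, f(x));
const filtering = (pred) => (reducer) => (acc, x) =>
  pred(x) ? reducer(acc, x) : acc;

const compose = (...fns) => (x) => fns.reduceRight((v, f) => f(v), x);

// The composed transform is independent of the output collection...
const xform = compose(mapping((n) => n * n), filtering((n) => n % 2 === 0));

// ...here we feed it an array-building reducer, but an object- or
// Set-building reducer would work with the exact same xform.
const pushReducer = (acc, x) => (acc.push(x), acc);
const result = [1, 2, 3, 4, 5].reduce(xform(pushReducer), []);
// result === [4, 16]
```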
John Lindquist: I was thinking, earlier today, about this question, "If a transducer uses transforms, does a reducer use reforms?" I just sat there and I was like, "Wow."
Brian Lonsdorf: Word play. I like it.
John Lindquist: I think there's something to it, right? Or if a reducer reduces, can you 'duce' something? But then that took my mind in a completely different direction.
Brian Lonsdorf: I'm just 'duce-ing' it again!
Paul Frend: Yeah, that's another lesson.
John Lindquist: Yeah, a reducer that does one pass is just 'duce-ing' it, right?
Brian Lonsdorf: That's ... exactly. There's an old joke in category theory, because if you say 'co,' it's the opposite. So you have like a ... you know ... monad and a comonad ...
John Lindquist: Yeah.
Brian Lonsdorf: So the joke is 'Cocoa Puffs' are just puffs.
John Lindquist: Whoa. Mind blown. Yeah.
Brian Lonsdorf: But yeah, one more definition of a transducer I wanted to throw out there: you can think of them as fold transformers, because they take a fold ... if you're thinking at the type level, a fold, or if you think of a fold as a reducer, takes our accumulator and some 'x' and returns the new accumulator. If you compose it, you'll just change that 'x,' not the accumulator. So, it's like a pre-composition of a fold, which is just a fold transform. Kinda fun.
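As a rough type-level sketch of Brian's "fold transformer" framing (the names here are illustrative): a reducer is (acc, x) => acc, and a transducer pre-composes on the element side while leaving the accumulator alone.

```js
// reducer    :: (Acc, X) => Acc
// transducer :: ((Acc, X) => Acc) => ((Acc, Y) => Acc)

// Pre-composing on the element is exactly what a mapping transducer does:
const premap = (f) => (reducer) => (acc, y) => reducer(acc, f(y));

// Example: fold strings into a total length by pre-composing `s => s.length`
// onto an ordinary sum reducer. The accumulator type never changes.
const sum = (acc, n) => acc + n;
const totalLength = ['foo', 'ba', 'x'].reduce(premap((s) => s.length)(sum), 0);
// totalLength === 6
```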
John Lindquist: Yeah.
Paul Frend: Nice.
John Lindquist: Yeah, no, I like both of those. I think ... I really like Paul's, just because it ... I watched his course on Egghead just recently, and it lined up with everything he said and it was all fresh in my mind.
Brian Lonsdorf: Yeah.
John Lindquist: So, what exactly ... what problem does a transducer solve? Why should we know about them? Open question.
Paul Frend: I'll go first, with the simple answer, because I'm sure Brian has a more complex one.
John Lindquist: No pressure.
Paul Frend: Well, the instant benefit, which is the one you grab at first, I think, when you first come across them, is the performance. In that you don't incur a memory penalty by building up all these intermediary collections. So, think of operating on an array and using the built-in operations on it: my collection, .map, .filter, maybe another map, and whatever other operations you've defined. Every time you call one of those, it's gonna iterate through your whole array and create another collection that then gets passed to the next operation.
Paul Frend: Which, if you're new to JavaScript, might not be very clear as you start chaining these things together ... that you're actually creating all of these intermediary collections. So the transducer, since we're just reducing over the collection once, will skip all that. And that also gives you the benefit of being able to short-circuit the operation that you've composed. Or, I'll rephrase that: since you've composed the transforms, as soon as one element of the array has gone through, it's gone through all of your transforms.
Paul Frend: That means you can stop whenever you want. There's no operation that's waiting, further down the line.
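A hedged sketch of the contrast Paul draws, assuming only built-in JavaScript. The chained version allocates an intermediate array per step; the transducer version makes one pass and can stop early. The DONE marker stands in for the reduced-value wrapper that real transducer libraries provide.

```js
// Chained array methods: one full pass and one intermediate array per step.
const data = Array.from({ length: 1_000_000 }, (_, i) => i);
const viaChaining = data.map((n) => n * n).filter((n) => n % 2 === 0).slice(0, 3);

// Single pass with early termination.
const DONE = Symbol('done');
const mapping = (f) => (r) => (acc, x) => r(acc, f(x));
const filtering = (p) => (r) => (acc, x) => (p(x) ? r(acc, x) : acc);
const taking = (n) => (r) => {
  let seen = 0;
  return (acc, x) => {
    acc = r(acc, x);
    seen += 1;
    return seen >= n ? { [DONE]: acc } : acc; // signal "we're finished"
  };
};
const compose = (...fns) => (x) => fns.reduceRight((v, f) => f(v), x);

const transduce = (xform, reducer, init, xs) => {
  const step = xform(reducer);
  let acc = init;
  for (const x of xs) {
    acc = step(acc, x);
    if (acc && acc[DONE] !== undefined) return acc[DONE]; // short-circuit
  }
  return acc;
};

const viaTransducer = transduce(
  compose(mapping((n) => n * n), filtering((n) => n % 2 === 0), taking(3)),
  (acc, x) => (acc.push(x), acc),
  [],
  data
);
// Both are [0, 4, 16], but the transducer never built the million-element
// intermediate arrays and stopped after the first three matches.
```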
Brian Lonsdorf: Yeah. I would have to add to that wonderful explanation ... if you're kinda championing the declarative approach, a more high-level programming approach where you capture patterns of loops in these functions that are reusable and composable, this enables performance, first and foremost, in a language like JavaScript that doesn't automatically do loop fusion via the compiler. So, in a functional language, you do maps and filters ... it's just gonna fuse them together, and you really don't need a transducer for that. It just does that. Right?
Brian Lonsdorf: But yeah, it enables you to have your output type, the thing that you're building, be separate from your input type ... it gives you that. But something that distinguishes it from just folds ... you can make folds compose quite easily, and, actually, you can write a transduce that works across folds. And folds are really, really powerful. Think of a reducer with one extra clean-up function and a tally function at the end ... that's what I mean when I say 'fold.'
Brian Lonsdorf: And what you get is the ability to combine multiple folds together in one pass; you can do stuff like streaming averages, or you can extend folds, all sorts of stuff. But transducers have a more powerful element to them in that you can break ... you can actually just stop whenever you want. Folds cannot do that. I mean, they'll still run. You can't short-circuit them or have mutable state ... well, I guess you can do mutable state in JavaScript ... or Haskell or something.
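A small sketch of that fold shape, a step function plus an initial value plus a final tally, and of pairing two folds so a single pass yields a streaming average. The Fold/both/runFold names are just for illustration, not from any library.

```js
// A fold is a reducer plus an initial value plus a final clean-up/tally step.
const Fold = (step, init, result) => ({ step, init, result });

const sumF   = Fold((acc, x) => acc + x, 0, (acc) => acc);
const countF = Fold((acc, _x) => acc + 1, 0, (acc) => acc);

// Pair two folds: the combined step advances both accumulators at once.
const both = (f, g) =>
  Fold(
    ([a, b], x) => [f.step(a, x), g.step(b, x)],
    [f.init, g.init],
    ([a, b]) => [f.result(a), g.result(b)]
  );

const runFold = (f, xs) => f.result(xs.reduce(f.step, f.init));

const [total, count] = runFold(both(sumF, countF), [3, 5, 7, 9]);
const average = total / count; // 6, computed in a single pass over the data
```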
John Lindquist: Yeah. So ... [crosstalk 00:10:23] ... with that definition ... with those definitions ... so, at what point do you start thinking the transducer is the solution, versus, "I wrote a map, then I did a filter. Okay, did another map. This one might be expensive. Then chain on another filter, and maybe another reduce. Well, maybe ..." Do you start cringing every time you chain on something else and you chain on another reduce? Or is it ... do you think, "Wow, we have a lot of data here. We have to think of performance, first thing."
John Lindquist: How do you ... when do you think, "This is the ... let's reach for a transducer now."
Brian Lonsdorf: I can just extend what I was just saying, in that I will immediately reach for folds. Particularly something they call 'beautiful folds.' It's just a type-
John Lindquist: Ooh.
Brian Lonsdorf: Yeah, it's beautiful. And-
John Lindquist: It makes you think of those people who can fold t-shirts really well. You watch a .gif of it ...
Brian Lonsdorf: Yeah. It's the crease. It's because they're using starch in their wash, I know it!
John Lindquist: Oh, okay. Beautiful folds? C'mon, Sir.
Brian Lonsdorf: KonMari style, you roll it ... but, so, the thing is there's a few different tools in the belt, right? You can reach for really heavy-handed recursion schemes, which, they do allow you to short-circuit with Elgot algebras and do all sorts of crazy stuff, but most of the time you don't need that. And it's a lot of work for, you know, if you don't need it. Same thing with a lens library. Lenses, or optics in general, can do all this stuff too, with traversals and folds and [inaudible 00:12:02], you know, kinda make these streaming compositions.
Brian Lonsdorf: But transducers have this sweet spot where you're writing map, filter, whatever, after each other, and you haven't really brought a library in, and you do need that boost in either composability or performance, or both. You can bring that in if you don't need anything more than those, right? And if you start to get into, "I want to do something that returns a streaming average, or some kind of multi-pass loop, or do two loops at once," things like that ... transducers are this really, really easy, lightweight solution to that problem without having to learn some serious library.
John Lindquist: The way you think about transducers and composability, once you learn them, makes them really easy to think about ... makes it really easy to solve complex problems. So, if you've never used a transducer before and you take ten, fifteen minutes to dive into them, it can be a good extra tool in your belt, to think, "Oh, that's that multi-pass problem I was gonna have to solve with whatever," and the transducer is an easy way of solving that, so I can just compose these three things together and be done. Versus, you know, recursion makes my brain hurt, and anything else. So. All right.
Brian Lonsdorf: What do you think, Paul?
John Lindquist: What do you think, Paul?
Paul Frend: Yeah, I really like that explanation. I agree, and I'll add to that, just to clarify as well. Coming from the JavaScript side, as soon as you want to bring in more complex containers and use other libraries to support the type of operations that you explained, that's instantly an overhead of additional libraries. So I like the fact that you can get away with just using built-in JavaScript to build up the pattern of a transducer.
Paul Frend: And, especially if you're on a big team that might not be that functional, not everyone even has to understand what they do, because they read like normal built-in transforms.
John Lindquist: Yeah. Composed functions, yeah.
Paul Frend: Yeah. Exactly. It's a good intro to composition, I think.
Brian Lonsdorf: [Scrap your for 00:14:21] loops.
Paul Frend: Yeah, yeah.
John Lindquist: I was just going to say, I've been working with my niece. It's her first ... she's never done any programming before, and I'm looking at every ... all these programming books on JavaScript and things, to introduce her to JavaScript. And ... all the for loops in there and all the state being mutated and these things ... I'm like, "How do I get her from zero to composing a couple functions together?" Versus-
Brian Lonsdorf: Lambda calculus.
John Lindquist: Sure. But I think there's another ... and that's something I've been writing ... this other path of going from zero to composable functions and currying and things like that. Where you introduce those early, because they are simple and easy concepts, just as easy as changing something or mutating some property or whatever. Anyway, that's a huge tangent. But I think going from zero to learning a transducer, if you don't have any other mutable logic polluting your mind, is a fairly easy path, but ...
Brian Lonsdorf: Yeah, I totally agree with you on that. In fact, so ... when I think of iteration, or the iterator protocol, and I'm like, "Okay." You know, when you go to solve a problem, you're like, "Well, I need to do something to these collections, so I'm gonna take ..." Write the code that says, "Give me each one." And then I'm gonna change it and I'm gonna make sure I perform some operation on each thing, and then I get this new thing back ... You don't really ... that's not how you begin thinking about this stuff.
Brian Lonsdorf: You can just say, "Well, I have an array. I want to map it to this new array." I just wanna ... it's totally a mapping from one to the other, and that is really natural to think about ...
John Lindquist: Yeah.
Brian Lonsdorf: ... up front. But as soon as you start thinking in terms of iteration in a mutable sense, you start to lose all that beautiful abstract thinking that would otherwise just come naturally, for sure.
Paul Frend: Exactly.
John Lindquist: But one last thought on that ... if you look at code.org and the little exercises that I've had my kids walk through, where you drag little blocks of code around ... move, rotate, jump and all these? They make a chain of functions and attach them together, and I'm like, "These kids are dragging little blocks of code and composing functions into a stream that will execute one after the other, with the same input." I'm like, "That's the very first thing they're learning." But for a lot of people in JavaScript land, you don't learn how to compose functions until years into your career. I ... anyway ...
Brian Lonsdorf: Yeah, after you've had the pain ... actually, I watched this study ... somebody did this long study, and what they came up with as the hardest problem in programming is not 'how to name something.' It's that people are like, "Well, I see this variable right here. Why can't I have it down here?" They're looking at the same page, or the same class, or the same module or whatever, and they just don't understand 'scope,' in general. And understanding scope in different languages, different contexts ... that's really complex. But you just remove that whole problem altogether if you have this pipeline of transformations, because you know what you have available. It's [passed in 00:17:40].
Paul Frend: It's funny how you have to learn abstraction. You learn abstraction, naturally, first. Then you have to forget all about it. Then you have to learn how to extract yourself from the inner-workings of something you really don't care about to then be able to apply all the abstraction logic again. It's like this weird circle, right?
Brian Lonsdorf: Yeah.
John Lindquist: I love that. It's funny how sometimes a beginner will ask you, "How do I do this one thing this way?" And I'm just like, "I don't remember how to do it that way." I understand your question, and I remember approaching a problem like that before, but I just haven't thought that way in a long time. Anyway, that was a fun tangent. Well yeah, let's kinda circle back to how you're using transducers in everyday stuff right now, with what you're working on.
Brian Lonsdorf: Well Paul's working on a video series on Egghead as a ... everyday ...
Paul Frend: Yeah, that's [crosstalk 00:18:41] my everyday involvement at the moment. I haven't actually used them much in production, apart from taking imported maps and filters and other collection operations from Lodash and replacing them with transducer calls. And, if anyone's interested in trying it out, I think that's a good place to start. Go into your application and find your 'black box' place that does the transformations, and just have a play and see what it feels like, because it will read nicely and it's probably not going to upset other developers. I hope.
John Lindquist: Yeah, I think that's great advice.
Brian Lonsdorf: Oh, they'll be upset. And if it does, you can be like, "Well, if you wrote all these with loops and then you came back and tried to refactor this, it would be a nightmare. But if it's a streaming composition, you can just move everything around, like Legos. And it's really easy to refactor." Probably the easiest selling point for that is you don't lose any of the speed, the perception of speed from a for loop, right? So you're ... they don't really have ground to stand on, except for like, "I don't like things that are different."
Paul Frend: Yeah, good point. If you're battling against a programmer who's saying, "Don't use your maps and filters, for performance reasons" ... well, now you can show them this and they should go quiet.
Brian Lonsdorf: But yeah, as far as usage goes ... and yeah, I feel comfortable using this at work, as well. It's not ... sometimes I go a little overboard, so I'll use some abstraction. Like, just when I was learning Gang of Four, right? I'd use every single design pattern ever. For no reason. It doesn't ... this isn't gonna work, but I'm gonna just shove it in here anyway, because I really want to use this.
John Lindquist: Like, I just command pattern the heck out of everything.
Brian Lonsdorf: Yeah, that's the best one. That and interpreter. Those are the two. Anyway ...
John Lindquist: And I'll find excuses for visitor because I just like the concept of ...
Brian Lonsdorf: Hey! Visitor, turns out, is an F-algebra, which is a fold.
John Lindquist: Hey!
Brian Lonsdorf: Which is an interesting thing, yeah! You're passing a function into the object-
John Lindquist: Maybe that's why I liked it so much.
Brian Lonsdorf: Yeah, there you go. So functional programming inherently gives you this inversion of control, right? Everything kind of gets pushed to the caller and given to the functions, the higher-order functions that you're using at the caller. The visitor just flips that on its head, really enables that, by letting you jump into the method.
Brian Lonsdorf: But, in any case ... if you google F-algebra and visitor pattern, you'll see a nice little proof of it. At work we do a lot of DOM stuff, and if you're traversing a DOM tree, it is a great place to do this kind of stuff. Because you might want to be filtering things and mapping things ... especially if you just do a querySelectorAll and get yourself a list.
John Lindquist: Yeah.
Brian Lonsdorf: You can start kinda doing all sorts of fun stuff with transducers without losing any speed. And I was just doing something like this for our CSS ... what we were trying to do with our design system is, say ... we have this thing, kinda like Bootstrap, but we say for all of Salesforce UX, if you're using an input, it should have these properties, it should have this accessibility on it, and ... so we're just kinda consuming the DOM and accumulating values and flipping switches, like flags. Like saying, "Oh. We have a violation here."
Brian Lonsdorf: And it's amazing for that kind of thing. You can start with a list of DOM nodes and build up a tree or a data structure, a map, whatever you'd like, with the output ... like error messages or whatever. So, good stuff there.
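A hypothetical sketch of that DOM-audit idea: one reduce over a querySelectorAll result, mapping each element to a rule check and keeping only the violations. The specific rule (inputs should carry an accessible label) and the helper names are made up for illustration, not Salesforce's actual tooling.

```js
const mapping = (f) => (r) => (acc, x) => r(acc, f(x));
const filtering = (p) => (r) => (acc, x) => (p(x) ? r(acc, x) : acc);
const compose = (...fns) => (x) => fns.reduceRight((v, f) => f(v), x);

// Example rule: every <input> should have an accessible label of some kind.
const checkInput = (el) => ({
  el,
  ok: el.hasAttribute('aria-label') || el.hasAttribute('aria-labelledby'),
});

const violations = Array.from(document.querySelectorAll('input')).reduce(
  compose(
    mapping(checkInput),
    filtering((res) => !res.ok),
    mapping((res) => ({ tag: res.el.tagName, message: 'missing accessible label' }))
  )((acc, v) => (acc.push(v), acc)),
  []
);
// One pass over the node list, accumulating an entry per offending element.
```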
John Lindquist: Yeah. For sure. Did you say you were using it as part of some voice recognition ... ?
Brian Lonsdorf: Oh yeah, so ... that's an interesting concept. So it turns out a fold is the same ... is isomorphic, or equivalent, to a Moore machine, which is a state machine. Kind of think of it as an infinite recursive reducer kind of thing ...
John Lindquist: Okay.
Brian Lonsdorf: ... that can return a reducer, the same reducer back or a new reducer back. So you kinda get into these new states as you go. But if ...
John Lindquist: I'm nodding my head like I understand what you're saying. This is fine. This is my audio nod.
Brian Lonsdorf: Well ... see ... so you've used Redux, right?
John Lindquist: Yeah.
Brian Lonsdorf: Okay, so if you have a reducer that's for your ... that says 'login,' right? Or it's got your ... so that reducer returns itself at the end, along with state.
John Lindquist: Okay.
Brian Lonsdorf: And the Moore machine version of it is that it gets the state and the payload, and instead of just returning the new state, it returns a type holding that new state and the next reducer ... whether that's going to be itself ... so, if the login was wrong, it would just keep returning itself. Or, if it was right, it could return the application reducer. Now that you've logged in, this is the next reducer that's gonna happen.
Brian Lonsdorf: So you kind of have this continuation that kinda gets itself into different states. And you can do the same thing with voice control, right? Saying, "Every time it hits my reducer, I'm gonna say ..." if someone's trying to ... let's say we have a To-Do app and they want to add a 'to-do,' it would reduce ... it would do the switch on the action. Like, "I'd like to add." And it'd say, "Okay, boom! I'm gonna return the 'adding to-do reducer'." Right?
Brian Lonsdorf: And that puts your application in a new state, looping there. If you cancel, you go back; otherwise, you can transition or you can keep accumulating the text. And you can do cool stuff with real voice recognition that way, too, taking each phonetic sound to the next graph [crosstalk 00:24:36]. Well, you think it'd be a graph.
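A loose sketch of that Moore-machine idea, with made-up reducer and action names (it isn't Redux's actual API): each reducer returns the new state together with the reducer to use next, so a wrong login keeps returning the same reducer and a correct one hands over to the next.

```js
const loggedIn = (state, action) => {
  switch (action.type) {
    case 'LOGOUT':
      return [{ user: null }, loggedOut];
    default:
      return [state, loggedIn]; // stay in this state
  }
};

const loggedOut = (state, action) => {
  switch (action.type) {
    case 'LOGIN':
      return action.password === 'hunter2'
        ? [{ user: action.user }, loggedIn] // success: hand over to the next reducer
        : [state, loggedOut];               // failure: keep returning ourselves
    default:
      return [state, loggedOut];
  }
};

// Driving it is just a fold over the stream of actions.
const run = (actions) =>
  actions.reduce(
    ([state, reducer], action) => reducer(state, action),
    [{ user: null }, loggedOut]
  );

const [finalState] = run([
  { type: 'LOGIN', user: 'paul', password: 'wrong' },
  { type: 'LOGIN', user: 'paul', password: 'hunter2' },
]);
// finalState => { user: 'paul' }
```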
John Lindquist: Yeah.
Brian Lonsdorf: Yeah, it'll just take each path. And there's all sorts of cool stuff. But with transducers, which are more the Mealy machine version of state machines, you do a state transducer where each node in your state machine ... which is a state, or text ... is connected via a path. And those paths can have weights, so maybe something's more probable than another path, or perhaps, if they took this path, we want to remember something.
Brian Lonsdorf: And so, each of those paths that you take has an extra value associated and we can accumulate the paths along the way, and the values of those paths. And run transformations over these accumulators, right? It's just a streaming fold. So, it's a really interesting way to model your application and have this really composable pipeline of transformations, without too much overhead. It sounds really complex, it's not that complex.
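And a very small sketch of that Mealy-machine flavour: every transition emits a value (a weight, say), and walking the input is a streaming fold that threads the current state and accumulates what the transitions emit. The tiny intent table here is purely illustrative.

```js
const transitions = {
  start:  { add:  { next: 'adding',  weight: 0.9 },
            todo: { next: 'start',   weight: 0.2 } },
  adding: { todo: { next: 'confirm', weight: 0.8 } },
};

const step = ({ state, score, path }, word) => {
  const edge = (transitions[state] || {})[word];
  if (!edge) return { state, score, path };   // unrecognised word: stay put
  return {
    state: edge.next,
    score: score + edge.weight,               // accumulate values along the path
    path: [...path, word],
  };
};

const result = ['add', 'todo'].reduce(step, { state: 'start', score: 0, path: [] });
// result => { state: 'confirm', score: 1.7, path: ['add', 'todo'] }
```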
John Lindquist: Yeah, I can see where you're going with it. It'd be fun to sit down and read through some of that.
Brian Lonsdorf: Yeah, definitely. I'm obsessed with it now.
John Lindquist: Yeah.
Brian Lonsdorf: I actually might even be trying to put together some Egghead videos. Maybe, maybe not.
John Lindquist: We'll see.
Brian Lonsdorf: Put a robot thing on my voice. Make everybody angry.
Paul Frend: And my brain is officially empty from that.
Brian Lonsdorf: Food for thought, you can take any non-deterministic action and turn it into a deterministic action, just by doing all the non-deterministic stuff. So, if you're doing voice, you can't necessarily map intent, right? And so if someone's like, "I want to add a to-do." And maybe they didn't actually mean that. Maybe they were like, "I want to add something." And they wanted to add a setting, or whatever.
Brian Lonsdorf: So you can take all the paths out of that node and do all the computation, and just keep whittling it down. And that is more efficient, usually, than backtracking and starting again. So essentially, what you can do is, especially with voice recognition, you can take all the phonetic sounds and just compute each step along the way, until we realize ... we have enough information to understand what he actually meant and we'll just throw away the rest.
Paul Frend: Like a constantly evaluating stream to decide which action to take next.
Brian Lonsdorf: Yeah, yeah. But it'd be like multiple stream paths, all at the same time.
Paul Frend: Yeah, very cool.
John Lindquist: That reminds me of some A* path stuff I did, back when I was programming games. We were trying to figure out the quickest way from point A to point B-
Paul Frend: That's exactly where my mind went, as well. It's like, "How do I get to Canada?" Or whatever it was.
John Lindquist: Yeah.
Paul Frend: Find the quickest path to Canada.
Brian Lonsdorf: Yeah, no. That's totally right. And there are really great papers about using rings and semirings over graphs to accumulate it and stuff. Dijkstra's algorithm and ... stuff that sounds really intense, but it's like, "Isn't that just these two functions?" Right?
John Lindquist: Right.
Brian Lonsdorf: [CONCAT and CONCAT 00:27:30]. You're like, "Oh, okay."
John Lindquist: Yeah, but they use a lot of words to say that?
Brian Lonsdorf: Yeah, yeah. It sounds real fancy.
John Lindquist: Yeah.
Brian Lonsdorf: Before that thought is up, I do want to say one thing. At work, if you are using these kinds of folds and doing things like that, you should really look into monoids, because they capture accumulation so well. And reducing can be formulated with a monoid really nicely. Anybody who cares can go google monoid, but it's the best. Because, essentially, you can take an accumulator and some 'x,' right? Your reducer takes the accumulator and some value ...
John Lindquist: Yep.
Brian Lonsdorf: ... and returns the new accumulator. Well, if you flip the two arguments around, it takes the value, then the accumulator, and returns the new accumulator. And a function that takes a type back to that same type ... accumulator to accumulator ... is called Endo, and it can be infinitely composed because it never changes types. So essentially, if you just flip the arguments around, you get this really nice type that allows you to infinitely compose anything, which is kind of amazing. So, check out monoids for that. Sorry. Little plug.
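A minimal sketch of the Endo trick Brian is describing (hand-rolled, not from a library): flip the reducer's arguments and partially apply the element, and what's left is a function from accumulator to accumulator; those compose forever, and composition plus the identity function forms a monoid.

```js
const Endo = (run) => ({
  run,
  concat: (other) => Endo((acc) => run(other.run(acc))), // composition
});
Endo.empty = Endo((acc) => acc); // identity

// An ordinary reducer...
const add = (acc, x) => acc + x;

// ...flipped and partially applied becomes one Endo per element:
const toEndo = (reducer) => (x) => Endo((acc) => reducer(acc, x));

// Folding is now just concatenating all the Endos and running the result.
const foldWith = (reducer, init, xs) =>
  xs.map(toEndo(reducer)).reduce((m, e) => m.concat(e), Endo.empty).run(init);

foldWith(add, 0, [1, 2, 3, 4]); // 10
```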
Paul Frend: Have you got ... you've got a section on monoids, don't you?
John Lindquist: No, I mean ... [crosstalk 00:28:45] ...
Paul Frend: ... your Mostly Adequate Guide to Functional Programming? If anyone's listening and you haven't read that ...
Brian Lonsdorf: I do.
Paul Frend: ... you should check that out. Make sure it's in our show notes.
Brian Lonsdorf: Yeah.
Paul Frend: That's awesome.
Brian Lonsdorf: Yeah, there's a ... yeah, there ... it even shows Endo in there. And you can formulate a reducer with Endo and then you could ... you get composition through [concatenation 00:29:03]. And then you can do all sorts of stuff that you can't do with the arguments the way they are.
Paul Frend: Nice.
John Lindquist: Tell you what, Man. Just changing arguments around, sometimes, opens whole new worlds of programming you didn't know about before. And it's so weird how that happens.
Brian Lonsdorf: Seriously.
John Lindquist: Just flipping arguments. Like, "What's this brand new thing that you did?" Well, I took the first argument and I made it the second one. "Whoa!" That's what a lot of these functional programming concepts are. But ...
Brian Lonsdorf: Really. Really, truly. And there's a whole theory behind it. It's wild.
John Lindquist: Yeah. All right. Let's see. What else have we got on my list, here? So, performance. We've been talking about performance. You guys have any recommended performance-measuring tools you use? I'm sure people would be interested in that. How are you measuring performance at work? And how does ... how do you know if something's fast enough?
Paul Frend: Well, I think the first thing you've gotta do is define what's fast enough. I've been caught a few times with briefs that say something's too slow but don't explain what adequately fast is, if that makes sense. You've gotta have a goal first, to make sure you know when to stop, 'cause otherwise you can lose a lot of time on it. I'm by no means a performance guru or anything, but that's one lesson it taught me.
Paul Frend: And, for me, I've got no real stories. I just use the built-in browser tools ... Chrome DevTools has gotten really far in its ability to measure performance, I think. And apart from that, it's just timing functions with different inputs to see how they scale.
John Lindquist: Are you satisfied with what the Chrome DevTools offers, as far as the performance analytics?
Paul Frend: Yeah. For my purposes they've been fine.
John Lindquist: Okay. Cool.
Brian Lonsdorf: Yeah, I'm kind of in the same boat. I've been programming for a while and I've run into some awful situations, but they're very few and far between, right? And it's gotten to the point where people are just scared, right? They're writing stuff that's horribly unmaintainable because they're afraid of it being slow.
John Lindquist: Yeah.
Brian Lonsdorf: And it becomes this much bigger problem than performance ever was. It's more that the boogeyman of performance is always looming around the corner, gonna ruin your day. But ... it turns out Jest is an amazing tool for that.
John Lindquist: Really?
Brian Lonsdorf: Yeah, if you write applications, like I do, which I'm sure you do, you probably want to write as much of your code as possible outside of whatever framework you're using. Keep it in a nice, pure model. Even if it's React, which is great, get it out and keep it in a model. And so, if you're testing all this stuff and you're writing these nice tests for your code, Jest tells you how long it took to run.
Brian Lonsdorf: It'll also cache after the first time, so you really get a ... I don't want to say perfectly accurate, but you get a test every time you run your tests. Like a speed test, just kinda for free, from how long it took to run the test suite. And you can tell it to target one test or whatever. So, something like I was doing the other day, this whole comparison thing ... it took a really long time, the way I wrote it.
Brian Lonsdorf: I wrote it with monoids, funny enough. Turned out to be really slow because I was-
John Lindquist: Wait a second. [crosstalk 00:32:40].
Brian Lonsdorf: And this is not the normal case ... but I was creating hundreds and hundreds of objects, because the way you can do this is, I made a little factory function as my monoid, so I could implement concat. And I didn't make a class because I hate classes. No, I just didn't feel like it.
John Lindquist: [crosstalk 00:33:01] Tell us more.
Brian Lonsdorf: Yeah, I didn't feel like going through the ceremony of making a whole class. I made a little factory function and you run it through and ... who knows what the JIT is gonna do, right? You don't know if it's gonna be fast or slow. But just by having a Jest test, I was able to shave a full second off my test suite by switching to a class. And I was like, "Oh, I see. I see why this is important." But ... it's pretty useful.
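A minimal sketch of that "free speed test" idea, assuming a plain Jest setup; the compare function here is a stand-in for whatever hot path you're exercising, not a real API. Because Jest reports how long the test (and the suite) took, an ordinary unit test over a pure, framework-free function doubles as a rough benchmark.

```js
// Stand-in for the hot path under test.
const compare = (xs, ys) =>
  xs.length === ys.length && xs.every((x, i) => x === ys[i]);

test('compares a large collection', () => {
  const a = Array.from({ length: 1_000_000 }, (_, i) => i);
  const b = [...a];
  expect(compare(a, b)).toBe(true);
  // The elapsed time Jest prints for this test is the "free" speed check.
});
```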
John Lindquist: So classes are faster. Class all the things. Is what you're saying.
Brian Lonsdorf: Yeah, exactly. The performance boogeyman is gonna get you. You have to write classes.
John Lindquist: You know if you're making-
Brian Lonsdorf: There can be thousands of options-
John Lindquist: If you make that rhyme somehow, it makes it more believable.
Brian Lonsdorf: Right.
John Lindquist: I've heard studies that people will accept it as true if it rhymes, so ...
Brian Lonsdorf: Oh, I like that. Yeah. So ...
John Lindquist: So work on that.
Brian Lonsdorf: I like it. No, but for real, I was making hundreds of objects, thousands of objects. You would want a class if you're doing that. That's an optimization for that exact thing, right? Where I was taking so many things and putting them in objects. And the other thing that we use, of course, is some internal Salesforce thing they call SOLE that runs through the page and tells you the bootstrap time of the page and certain UI rendering things, and ... Yeah, we just spent weeks on IE11. Oh my gosh. My arm's tired.
Paul Frend: Really?
Brian Lonsdorf: Yeah, so I much prefer to write this stuff in a pure model and just use Jest to kind of give me ... I mean, if we're talking about transducers and for loops, you don't really need to care about ...
John Lindquist: Yeah. I know Paul and I have both been playing with Wallaby.js as a measurement tool. You can just add specific comments ... a /?*-something type of comment ... that will execute the block of code underneath it, or in front of it, and do an inline performance measurement and show you how many milliseconds it took. You can also feed some arguments into it, execute it and see what the outputs are, to kinda do a pre-unit test, of sorts.
Paul Frend: Yeah, that was very [crosstalk 00:35:14] exploratory. Really changed how I programmed. I really liked it. 'Cause you just get that instant feedback and you can sort of [sign 00:35:22] by, not necessarily for performance, but by just having the output of the function logged out straight away. It's like ... I really liked what you said, John. It's like your pre-unit test.
John Lindquist: Yeah, it's really a ... it's a very friendly, easy inline, surprising ... in a good way ...
Brian Lonsdorf: That's awesome.
John Lindquist: Yeah, way of testing your code or showing what it does. And I've really liked it so far, but it's one of those small projects where I just hope I can trust what it's telling me, sort of thing.
Brian Lonsdorf: [inaudible 00:35:54].
Paul Frend: Yeah, exactly. It is funny though, just talking about performance issues: while recording this course, one of the things I tried was just that, timing these operations with millions of records. But I couldn't find the setting for it ... maybe there is one, hidden somewhere ... where you can time the function just once. So every time you save or write some new code, it'll execute all of those comments. It'd be: write a bit of code, save, and then it would run all the functions that had timing comments against them, and I would sit and wait there for ten seconds.
John Lindquist: Oh, so it's like a cache flag or something?
Paul Frend: Yeah, exactly. That would be really cool.
John Lindquist: Well, let's send him a message on Twitter, just like every other bug report I've ever filed.
Brian Lonsdorf: Fix it now! But you know, I do want to mention one cool trick you can do that's very similar to that. If you have a unit test suite, there are a bunch of libraries out there that do a style called QuickCheck, which will check that a property holds for any input. And what you do is, you have these little, composable generators or whatever ... basic data types that generate stuff, so bool would be true or false, right?
Brian Lonsdorf: And string would just be this infinite amount of random strings that are arbitrarily created. And then you can build up bigger data types from those smaller data types. But basically, you say, "Check that this property holds for all of the type of input that my function takes." And if you're doing functional programming, it kinda opens up that possibility. So, a stupid example is to say, "For all ... reverse ..." if I reverse an array and reverse it again, I should get the same array back. But it'll throw arrays at you with bignums in there, and crazy stuff, empty arrays and whatever.
Brian Lonsdorf: And so, that's another cool way to get this generative testing. Jsverify is a good library for that ... and well, it plugs right into Mocha, or Tape, or Jest or whatever, so ...
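A small sketch of that reverse/reverse property with jsverify (the 'array integer' string is jsverify's generator DSL): it generates arbitrary arrays, including empty ones and awkward values, and reports a counterexample if the property ever fails.

```js
const jsc = require('jsverify');

// Property: reversing an array twice gives back the original array.
const reverseTwiceIsIdentity = jsc.forall('array integer', (arr) => {
  const twice = arr.slice().reverse().reverse();
  return twice.length === arr.length && twice.every((x, i) => x === arr[i]);
});

jsc.assert(reverseTwiceIsIdentity); // throws with a counterexample on failure
```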
John Lindquist: Cool.
Paul Frend: Awesome.
John Lindquist: Cool, well thanks, guys. Is there anything else you want to add before we just kinda wrap up here?
Brian Lonsdorf: Let's see ...
John Lindquist: Just open ...
Brian Lonsdorf: Please, please don't try to optimize your code until someone tells you you have to. I just want to throw that out there.
John Lindquist: That's excellent advice. I've been on a project where there was one developer who could not stop optimizing things, and they never got anything done. And it brought down the team, you know? It hindered a lot of progress, because their task was to make this thing, this infinite list of sorts, scroll really smoothly, and load data smoothly and all these things. And they just could not get it done. Yeah. Very upsetting.
Paul Frend: Yeah, what is a good closing statement here?
John Lindquist: I should have prompted you guys on a closing statement, sorry.
Paul Frend: It's like in front of the jury.
John Lindquist: Think of something really smart to say.
Paul Frend: That's why I'm nervous right now.
Brian Lonsdorf: Closing statement! Yeah, well, if you're interested in transducers and you're enjoying them and want to take it a step further, check out lenses, which have the power of transducers, but even more. So, you can kinda start with this really lightweight library and, if you're enjoying the composability and the ability to traverse stuff, look into optics ... there are different libraries available, but that's a cool next step if you need more power.
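For a taste of what that next step looks like, here is a minimal hand-rolled lens sketch (not the API of any particular optics library): a lens pairs a getter with a setter, and composing lenses lets you view and immutably update deep inside a structure.

```js
const lens = (get, set) => ({ get, set });

const prop = (k) =>
  lens((obj) => obj[k], (v, obj) => ({ ...obj, [k]: v }));

// Compose an outer lens with an inner lens to focus deeper.
const composeL = (outer, inner) =>
  lens(
    (obj) => inner.get(outer.get(obj)),
    (v, obj) => outer.set(inner.set(v, outer.get(obj)), obj)
  );

const view = (l, obj) => l.get(obj);
const over = (l, f, obj) => l.set(f(l.get(obj)), obj);

const street = composeL(prop('address'), prop('street'));
const user = { name: 'Paul', address: { street: 'Main St', city: 'Brisbane' } };

view(street, user);                         // 'Main St'
over(street, (s) => s.toUpperCase(), user); // new object with street 'MAIN ST'
```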
Paul Frend: Have you got any examples of library names for optics?
Brian Lonsdorf: Let's see, I think there's flunc optics? Is that ... is that ... no, it's Flunc is the guy? Or no, flunc is the library. F-l-u-n-c optics. Done by Scott Christopher, who's also, I think, in Australia?
Paul Frend: Oh really?
Brian Lonsdorf: Yeah, he's in Sydney.
John Lindquist: I've used partial.lenses. Have you seen that library? It's part of a bigger suite around a streaming approach to React called Calmm.
Brian Lonsdorf: Oh no. I'm looking at it right now, though. Power of the internet.
John Lindquist: Yep. Partial.lenses is my favorite lens library right now. But it's one of those where I feel like I'm one of twelve people using it, so ... I try to stay on the common, normal code path. Otherwise ... I don't want to be the lone nomad out in the middle of nowhere who has his own way of thinking.
Brian Lonsdorf: Right, right. And this is definitely ... yeah, this definitely does what I promised. It will take transducers to the next level.
Paul Frend: Wicked.
John Lindquist: Yeah, it's ... I ... maybe I recommend against looking into that ... if you're not ...
Brian Lonsdorf: Yeah, this is definitely way too much. This is the sledgehammer bazooka. A 'bazooka that shoots sledgehammers' solution.
John Lindquist: Yeah, but its lenses support traversals, and the other lens libraries I looked at didn't, and ... anyway, that's all.
Paul Frend: Okay, I've got my closing statement.
John Lindquist: All right, Paul. Paul, what's your smart thing?
Paul Frend: I'll take my five seconds here. I guess I would finish with ... if you're not gonna optimize for performance, try and optimize for the relationships in your code. Transducers are one way to do that, by decoupling those transforms and setting up pipelines. It's very easy to create code that focuses on the nouns, the things, and not so much on the relationships. So, transducers are one way to let you name how things relate together, as well. I'll end on that.
John Lindquist: I like that. More verbs, less [inaudible 00:41:48].
Paul Frend: Yeah.
Brian Lonsdorf: Gerunds.
John Lindquist: All right, we'll end there. I'm gonna hit the big, red 'stop recording' button. Thanks for listening. You can learn more about transducers from Paul, and more about functional programming from Brian, on Egghead.io.