console.blog - https://log.rdl.ph (Wed, 22 Mar 2023 07:22:35 GMT)

We are on the Cusp of a New Golden Age of Indie Games

I've "tried" to learn game development at various points in the past, between the bones of a tower defense game in pure C in college (you know what, it was actually not awful, in my recollection) to some little exercises in PhaserJS.

The consistent feeling I had, though, was that the entire process was geared toward some end result(™). The process itself was cumbersome and frustrating, only buoyed by some future reward: "Eventually there will be a game here. One day."
There is undoubtedly value in practicing delayed gratification; not everything good happens immediately. But the gap between starting a game development project in the past and getting something "fun" (or at least impressive) out of it was very large, and that gap is where burnout happens.

The Indie Game Resurgence

It probably seems like indie games had their heyday in the early-to-mid 2010s. 2011 was a banner year, bringing us Minecraft, The Binding of Isaac, and Terraria. Fez was released in 2012. Shovel Knight came out in 2014, then Undertale and Ori and the Blind Forest the next year. 2016, while light on acclaimed indie releases, did bring us the heavy-hitter Stardew Valley. 2017 closed out the "golden age of indie games" with two wildly popular crowd-pleasers: Hollow Knight and Cuphead.
What have we gotten since?
Among Us? Hades?
Great games, but a trickle of good indie games is not indicative of a healthy ecosystem where smaller, more experimental (or innovative) games can thrive.

But, maybe it's not about survival. Maybe it's about how much effort it takes to build a game. In those days, developing a game could actually make sense: the appetite for new indies was ravenous, so the financial upside was huge. The success (and eventual purchase) of a game like Minecraft showed that a single dev with enough development chops could actually make a fortune selling a game they built in their free time. But the effort it took was also astronomical. Minecraft was Java; I'll leave that there. Game engines at the time were obtuse (as far as I can tell) and inflexible. Even a game as massive as Stardew Valley was ported to a new engine - in 2021 - because the old one was holding it back. An engine port is not something to be taken lightly, so we can assume that the older engines were cumbersome.
So, while the development was more difficult, the upside was worth it. In the late 2010s, though, the appetite was sated. Much has been written about the river of flaming garbage being pumped out by the likes of Steam Greenlight, but I think it's fair to say that consumers are more cautious than before, which means games need to deliver on their promises, have some hook that makes them truly fun, and be affordable (a universal definition of "affordable" is essentially impossible, but everyone has their own given a set of circumstances).

I think that we are on the cusp of a new golden age of indie games.
Unity is just friendly enough that new developers can pick it up with a little bit of initial struggle. Even Unreal is getting more and more friendly to newcomers.
But the real thrust of new game development is relatively new engines like MonoGame, Construct, and Godot. Construct is a notable exception here, in that it has a subscription model and a proprietary license. MonoGame and Godot, however, are open source and have extremely friendly licenses. Perhaps even more importantly, an engine like Godot is democratizing game development in an easy, consumable way.
Working your way through the Godot tutorial is just a matter of a few hours, zero dollars, and - maybe most importantly - it leaves you with foundational knowledge you can build on for your next step. You're not just copying and pasting code, you're understanding what it is that you're doing.

And this is why in the next 3-5 years, we're going to see a new glut of high-quality indie games. Not because of a massive shift in end-user appetite (although there is an argument to be made that nostalgia for days gone by is whetting a new appetite), but because modern tooling has brought the effort and investment in building a game back in line with what you can expect to get out of it.

I posted on Mastodon about my own experience with starting to learn Godot, and the basic takeaway is this: I have never felt like the learning process was the intrinsically fun part of game development. That the actual development part could be the fun part - thanks in large part to Godot - is a monumental shift.
I don't expect indie games to take over from the Calls of Duties and Defensive Structures in the Evening, but I do expect to see a steady stream of high-quality indie games over the next few years, and I - for one - am here for it.

https://log.rdl.ph/post/we-are-on-the-cusp-of-a-new-golden-age-of-indie-games.html (Mon, 06 Mar 2023 21:31:00 GMT)
New Year, New Me? My Resolutions for 2023

Better late than never

It's getting nearer to the end of January, so it seems like a good time to actually write down what I want to accomplish in 2023.

One of the key things that I typically incorporate into my "resolutions" - if you can call them that - is a reaction to things I didn't like from last year. So here's what I want to change about my life in 2023.

I resolve to:

Read more books

This one... uh... this one won't be that hard. Last year I read two books and I even had to cheat and say I finished the second one in 2022 (although let's be honest the rule is that the day isn't over until you go to sleep).

I've found that I feel guilty if I don't finish a book. I'm not sure where it comes from, but take it from me (I'm mostly talking to myself here): There are so many books. So many. Way more than you could ever read. And some books are bad! Actually lots of books are bad! It's possible that the majority of books are bad, actually. It's okay to not like a book and just... quit it.

I think it will also help if I give myself a license to read whatever I want. Last year, I told myself I would read one "fun" (maybe fiction) book and then a more academic book, and then back to fun, etc.
This sets me up to get stuck on a boring academic book, set it down, and never pick up another book for the rest of the year. After all, I didn't finish my veggies, so I can't have my dessert.
This is absolute bollocks and I'm giving myself - and you, reader! - permission to just read whatever. Read a good book, who gives a shit what it is as long as you like it?
Did you go to my bookshelf link up above? The first book I read this year was Jurassic Park. Banger book. Dinosaurs go hard. So many people got eaten. Read it in like two days. It was fun! It's okay to just have fun!

Finish more video games

Oh, I hear you: "but Thomas," you whinge, "playing more video games isn't a positive life change."
First of all, nobody said resolutions need to be positive changes.
More importantly, however, the evidence that playing video games has wide-reaching and substantial positive impacts on logic, cognition, sociability, motor skills, academic performance, and on and on has been piling up for decades. It's actually a perfectly fine pastime.
Most importantly, my resolution is to finish more video games. Not play (necessarily), finish.

I find that my general approach to video games is either casual ("I have a few minutes"), creative ("I want to build something"), or completionist ("all of the collectibles in this game seems achievable!").
None of these approaches lend themselves to finishing high-quality, narrative games. Not that Doom Eternal necessarily falls into the high-quality or narrative categories, but I've been stuck at something like 90% completed on it because I missed a single collectible on one of the last levels, and the idea of going back and replaying the whole level just so that I can get that one item makes me never want to touch it again.
I know! I put myself into this position! I'm not happy about it!
So the goal is to finish more games, which will take some planning:

  • Intentionally limit myself to shorter games.

    I've found that huge, long games tend to get boring anyway. More is - quite often - not better. Tell me a story with a great set of mechanics and maybe a little innovation in 15-25 hours and I'll be perfectly happy.

  • Allow myself to not experience every possible piece of content in a game.

    I suspect that my completionist streak comes from a very long time ago - a much more formative time - when I was a gamer, but also very broke. Buying a game was a much riskier venture, so I had to get everything possible out of it.
    I don't have the time or energy for that level of neuroticism any more, and I have quite a lot more money, too.
    Basically, take it down a notch.

Use my own game tracker exclusively

This is a bit of a sleeper resolution. See, I've been using Backloggd for a bit to track video games, but it's got some bits that are too complicated, and some odd parts, and some things that I think really could be a lot better.
So I started building my own.
It's actually going fine, thanks.
It's going fine, but slowly. So this resolution is to attempt to guilt myself into increasing the pace just ever so slightly. Can I get a version 1 in 2023? Sure, I can. Can I do it with time to spare? Well, yes, I could. I just have to get motivated.
It's hard to build complicated web apps when there's absolutely no promise of a pay-off (financial) or even niche popularity. In all likelihood, this app will be exclusively for me. All by myself.
But!
I resolve to use it exclusively by the end of this year, which means that it will be "v1 feature complete." And version 1 is a pretty good app, if I do say so myself. So ultimately, the resolution here is "get to v1 on the game tracking app." But that's a bit clichéd, if you ask me: "I'm going to work on my side project!"
Sure, pal.
Maybe this way I can feel like it's an accomplishment!


So there we are. That's what I want to try to do better in 2023. If I don't do it, that's okay, too!

https://log.rdl.ph/post/new-year-new-me-my-resolutions-for-2023.html (Sat, 21 Jan 2023 06:42:00 GMT)
Game Tracker devlog #1

A few months ago, I started working on a new web application.

A commit graph showing many commits in August and September of 2022, but only a few in October.
My commits were pretty strong - almost daily - in August and September. All on this new app.

As with most apps I start building, this new one is to scratch my own itch. I've been using video game tracking (and movie tracking) sites for a long time, but I've never found a video game tracker that felt like the tool I wanted to use.

When I started, I had all of the early-project drive, and I got a lot done (I have 51 cards in my "Done" column!). But then, I came to the first problem: a feature that would allow the user to write their own data back to the application store.

No, I didn't get stuck on content moderation, I got stuck on the ability to submit a star rating.

ENCAPSULATION!

To be clear: I didn't actually get stuck on making a star rating. What I got stuck on is the idea - which I believe in strongly - that UI components should not make network requests.

Of course, network requests have to happen somewhere, so my approach for a long time has been to abstract those types of things (and every other kind of "not UI" action, too) out to another place.
This is super beneficial, because I can generalize any behavior, and I can exercise an event-driven architecture. However, it does have the one downside that I got stuck on: If I want to build an event-driven system and keep UI components clean, how far should I take that?

After all, if it's events all the way up, then I could catch every single event in exactly one place and branch to triggering actual behaviors from there, right?
But, the problem (of course) is that those UI components need to react to events occurring. Obviously, this problem has been solved in the most over-engineered way possible with Flux libraries like Redux and Vuex. You publish an action or you use a getter, and you go through a whole store update cycle, and then that data flows back down into the component. You've heard the story.

I have written - at length - about Flux in the past. As noted in that post, there are good parts to Flux. Namely, the directory full of actions that define how the application behaves.

Where I get stuck, though, is when I have to put this into practice. I don't actually want to abstract everything away from the components. I'm also absolutely loath to put "business logic" and network calls into a UI element.
Where I'm at - with an architecture like Fluxbus - feels like the best of both worlds: I'm boiling behaviors like data access and network calls down to single event actions that can be observed by any actor in the entire application. The component only issues those actions and they are handled elsewhere ("business logic", essentially). And I'm not depending on a global object or magic variables, the way the VDOM libraries have done it - badly - for years.
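To make that concrete, here's a minimal sketch of the shape I'm describing (the file names, message names, and endpoint are illustrative, not my actual code): the star-rating component publishes a message and nothing else, while a separate service owns the network call.

// star-rating.js - the UI component only publishes an action
import { publish } from "./bus.js";

function onStarsPicked( gameId, stars ){
	publish( { name: "RATING_SUBMITTED", data: { gameId, stars } } );
}

// rating-service.js - the "business logic" and network call live here
import { publish, subscribe } from "./bus.js";

subscribe( {
	"RATING_SUBMITTED": async ( message ) => {
		// Hypothetical endpoint; the real app's API differs
		await fetch( "/api/ratings", {
			method: "POST",
			headers: { "Content-Type": "application/json" },
			body: JSON.stringify( message.data )
		} );

		publish( { name: "RATING_SAVED", data: message.data } );
	}
} );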
But, I still don't like it.

So that's how I got stuck submitting a numeric star rating on an app I'm working on.

Ultimately - like my refreshed approach to blogging ("just do it and worry about cleaning it up later") - I'm going to just forge ahead and deal with a refactor later.

My concern is not that I may have to refactor, it's that I can't think of the next improvement on top of this to remove all of the annoying bits. I feel nearly paralyzed by imposter syndrome.


Links I've been pondering

https://log.rdl.ph/post/game-tracker-devlog-1.html (Fri, 18 Nov 2022 09:35:00 GMT)
A Good Code Review

If you've worked in software development for any amount of time, you've probably encountered The Code Review. This process is typical in almost every development organization.

What is code review?

The short explanation of what "code review" means is this: Someone other than the author looks at code changes and makes suggestions before the changes go into the product.

Why is code review?

Having experienced code review at many different companies and with many different reviewers on open source projects, I have whittled down my idea of what a code review should be to a few simple rules.

Every single comment on a code review should address a failure of one of the following guidelines:

  1. Is this code effective in accomplishing the stated goal of the changes?
  2. Is this code obviously correct in execution (e.g. apparently defect-free)?
  3. Is this code free of major security or performance issues?

Every code review comment should clearly address some problem with one of the three items above. If a comment does not, it should be explicitly self-identified as both non-blocking and an "opinion," "thought," or "concern." Sometimes an "opinion" can manifest as a "suggestion," but these are explicitly non-blocking, and should be self-identified as such.

If a review does not address any of these issues, but also fails to self-identify as non-blocking opinions, consider that what the review does - in fact - is arbitrarily gatekeep software changes.

The only meaningful review is one that catches implementation issues ("You said this does X, but in reality it does Y, which is similar, but not quite right."), or security flaws ("This would allow a malicious user to expose our database structure."). Reviews that contain anything else are meaningless FUD, used - most often - to enforce arbitrary adherence to actively harmful cult practices.

How is code review?

A convenient way to make sure your own code reviews are on target is to use a small table of "review metadata" to categorize them. You could include two items to start: "Type" and "Blocking".

Type could be one of six options (but your values could vary):

  • Operation: This doesn't seem to accomplish the stated goal.
  • Defect: There seems to be a logic problem.
  • Security: This causes a security problem.
  • Performance: This causes a noticeable performance problem.
  • Thought: This made me think of something.
  • Suggestion: Here's another way to do this.

The blocking value of your metadata would depend heavily on which type you use, but it's safe to say "Thought" or "Suggestion" types would be non-blocking, while the rest would almost certainly be blocking issues.

Here's an example review metadata table:

Review Comment Metadata
Type:     Performance
Blocking: Yes

That table might follow a comment like: "When I ran this in my browser, all of my fans spun up to their max speed. I checked the memory profile and this change consumes 800MB of additional memory when the page loads."

Suffice it to say: If a review comment can't fall into any of these types, it probably doesn't have much value to the code author. Likewise: if none of a review's comments are blocking issues as defined above (Operation, Defect, Security, Performance), then the review should automatically be approved!

https://log.rdl.ph/post/a-good-code-review.html (Mon, 14 Nov 2022 14:45:00 GMT)
Mastodon for People Who Don't Like Mastodon

Have you considered leaving Twitter, heard about Mastodon, checked it out, and didn't like it?

I'm sorry to hear that, but maybe this quick, no-BS explainer will help you get acclimated to something that isn't owned by a billionaire.

I don't like servers or instances or whatever. Where is mastodon.com?

It's mastodon.social. Yes, there are lots of other ones, but if you just want the "twitter.com of Mastodon," just sign up on mastodon.social.

Usernames are annoying

Okay.

Think of them like email addresses. Nobody could email you at @username unless there was a single, monolithic email service that controlled all email, so we use username@[your email service]. Maybe you use username@gmail.com, but you could also use username@hotmail.com and it still works because it's a standardized email protocol. Mastodon works the same way, so you would use @username@[your mastodon service]. In your case: @username@mastodon.social.

Nobody I like is there ☹️

Maybe, but also maybe not.

Anyway, Twitter was also a weird place in the early days with only the earliest adopters there, too. Maybe you can be a tech leader this time!?

But, like, where are all the interesting conversations?

Ah, well, they're happening, you just have to find the individuals having those conversations and follow them.

See, the biggest difference between Twitter and Mastodon is that there's no algorithm on Mastodon. There's no computer trying to juice engagement by showing you the hottest takes sure to elicit the most angry responses. There's no engagement boost when you favorite a post, or when you "ratio" a post. In fact, there's no incentive at all to posting or engaging except the real human part of that. Like something? Favorite it. Only you and the author will know. Isn't that a pleasant human interaction?

I want to retweet.

Sure.

They're called "boosts" on Mastodon, and you can do them as much as you want. The only difference is there's no Quote Tweeting. Quote Tweeting is a great way to dunk on somebody, but a terrible way to have a real conversation, so you can't do it. You can reply to them, though. It's much more civil!

I want to heart tweets.

Sure.

You can favorite (it's a star, like how Twitter used to be) posts. They're called "posts" on Mastodon (or "toots", but you can call them posts if "toots" sounds silly. Nobody cares. It was named by a person who didn't speak English as their first language, and they didn't know "toot" had a connotation other than the elephant sound in English.)

"toot" is cringe.

Saying things are cringe is cheugy.

Anyway, why is the sound an elephant makes (toot!) worse than the sound a bird makes (tweet!)? I like Proboscideans more.

I like apps. I want to use an app.

Sure. Define "app."

Haha, just kidding. I know what you mean.

One of the most popular Mastodon apps is Tusky (Android only). If you're on iOS, you can use the official app. For either platform, however, you might as well just install the PWA. Mastodon is a full-featured web application that you can install directly from the web page (use your phone's browser's "install" menu option). It looks and feels just like a native app, but it's always up to date and live just like a web site.

Why is following some people hard?

Basically, for the same reason that you have to type the full email address of a contact.

A good way to think of following people on Mastodon is like subscribing to a newsletter: Sure, you can go to someone's page and see the button to sign up for their newsletter, but you still have to type in your email address for their newsletter to reach you. You both have to be aware of each other for the connection to work.

There's a quick and easy work-around if you ever see that irritating window asking you to type in your username, though.

  1. When you're on a person's profile page, copy their username. Again, that's the bit that looks like @[the thing they picked]@[the server they're using]. Mine is @rockerest@mastodon.social.
  2. Go back to your own account.
  3. Paste the username of the person you copied earlier into the search field and submit the search.
    • Mobile: Click the "Explore" tab (it's the one that looks like a number sign or a hashtag).
    • Official App: Tap the search magnifying glass.
    • Desktop: The search input is at the top left.
  4. Press the "Follow" icon next to their account.

I can pay to edit my tweets and get fewer ads on Twitter.

Bummer!

All of that stuff is free on Mastodon! The latest version of Mastodon (which mastodon.social is using) has editing built in, and since there's no company trying to monetize your attention, there aren't any ads at all.

But if I pay on Twitter they give me a checkmark.

Well that seems like a great way to devalue identity verification - which should be a public service.

Anyway, Mastodon has something a little similar, but you self-certify by modifying a website that you control to point back to your Mastodon account. It's pretty easy to do by just adding rel="me" on a link to your Mastodon account.
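For example, a link like this anywhere on a site you control does the trick (the username here is a placeholder - swap in your own profile URL):

<a rel="me" href="https://mastodon.social/@username">Find me on Mastodon</a>

Add your site's URL to your Mastodon profile, and the link back is what makes it highlight green with a checkmark.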

I don't have a website.

Another bummer!

As you move out of centralized services like Twitter to "less centralized" services (like the federated services of Mastodon), it might be a good idea to consider who owns your data!

Before - on Twitter - Twitter owned all of your data. Leaving Twitter is hard because it's there - and absolutely nowhere else. In environments like Mastodon, you control your data, which means it's especially helpful to have your own website (it can be free!).

I don't want to get all starry-eyed in this "no-BS" guide, but if you're considering leaving a centralized service like Twitter, you might have that spark of individuality, too! A personal website is a great way to control who owns your voice and your data, and as a side effect, you can make the link on your Mastodon profile highlight green with a checkmark if you self-verify with a link back 😀.

What the heck; there are too many timelines!

I feel you.

If you're not following anyone, the best place to get started is your "Local Timeline." That's everybody else on your server. It's like all the tweets on Twitter.com.
Once you're following people, "Home" is your feed of posts from everybody you're following, in chronological order (remember, there's no profit-driven algorithm here, just an honest-to-goodness timeline).

If you really want to go crazy, the "Federated Timeline" is... well it's every post from every person on every server (that your server knows about).
This is like if hundreds of people were each running their own Twitter.com and each of those Twitter.coms each had thousands and thousands of users.

I don't find the Federated timeline all that useful because it's a massive firehose of posts, and - because Mastodon is used by people all over the world - it's almost never in a language I speak! (You can filter posts to only languages you know in your preferences, but most people don't seem to change their post language from the default "EN," so I still get many posts I can't read.)

Stick to "Home" and "Local Timeline," they'll serve you best.

Why are lots of posts on Mastodon hidden behind a "Show More" button?

Some parts of Mastodon culture (and there are lots of them, so don't assume this is universal) feel that using the "Content Warning" ("CW") button when creating a post creates a super safe space for everyone on the platform. They put a general "subject" of the post, and then put the actual content inside the "hidden" content. If you're talking about very hard subjects, this is definitely kind to people who may not want to read about those things. It's also convenient when you're just talking about random topics like "politics" so people can choose whether they want to even engage with that.

However, using Content Warnings isn't a requirement. If you don't like it, don't feel obligated to use it. Some people might say that Mastodon culture always uses content warnings for any potentially sticky subject, but the bottom line is that there is no single "Mastodon culture," so do what feels right to you (but follow the rules of the server you're on.)


Do you still have concerns or questions about Mastodon?

You can contact me directly at @rockerest@mastodon.social and I'll try to add it to this document.

Happy tooting!

https://log.rdl.ph/post/mastodon-for-people-who-don-t-like-mastodon.html (Sun, 13 Nov 2022 16:40:00 GMT)
Running Puppeteer on WSL2

I have recently had reason to subset fonts manually and I found a tool called glyphhanger which subsets arbitrary fonts to arbitrary codepoints.

As with many of the cool, useful tools on the internet these days, it's something that Zach Leatherman built. Thanks Zach!

The thing - though - is that glyphhanger requires Puppeteer.

Installing Puppeteer in a non-graphical environment

I use Windows for all my day-to-day programming outside work, but I use Linux - via WSL2 - to actually make it a usable environment.

Puppeteer doesn't work out of the box in a "headless" Linux environment.

Thankfully, I found someone who had the same problem. They went through a lot more steps, so I thought I'd boil it down to just two (a quick smoke test follows the steps):

  1. Install chromium with sudo apt install -y chromium-browser
  2. Install one missing library dependency with sudo apt install -y libatk-adaptor
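If you want to sanity-check the result, a little smoke test along these lines should now launch cleanly (a sketch; it assumes Puppeteer is installed in your project and a Node version recent enough for top-level await):

// smoke-test.mjs
import puppeteer from "puppeteer";

const browser = await puppeteer.launch();
const page = await browser.newPage();

await page.goto( "https://example.com" );
console.log( await page.title() ); // should print "Example Domain"

await browser.close();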

Enjoy GUI-less Chrome(-ium)

https://log.rdl.ph/post/running-puppeteer-on-wsl2.html (Wed, 01 Jun 2022 02:15:00 GMT)
Here's How 2021 Is Going So Far
[Compiled from a Twitter thread]

The amount of Stockholm Syndrome in web development and overall engineering is mind-boggling and overwhelming.

I don't even know where to start with people who love how hard frontend development is today.

Frontend is undeniably complex (and extremely so). However, it does not have to be hard, or compiled, or heavy (in runtime and bytes). These are invented problems.

"Imagine if literally everything wasn't terrible." doesn't work when the recipient loves how it is right now.

I'm so exhausted. Frontend development has devolved into a race to the most complicated solutions to already-solved problems, propped up by thought leadership that excommunicates anyone who dares speak out.

You fall in line or you are labelled "obsolete."

I think if this is how frontend web development is going now and into the future, I'd rather be obsolete. I don't want to be part of this travesty when we are ultimately mocked for it.

https://log.rdl.ph/post/here-s-how-2021-is-going-so-far.html (Thu, 11 Mar 2021 19:30:00 GMT)
Capitalism Has Almost Finished Destroying Software

Just a little farther to go before we get a thinkpiece about how the world is "post-software" and we need something else [that we can suck boatloads of money out of before we discard its withered husk].

Myopia Is Caused By Capitalism

A bunch of companies made a lot of money in the last two decades, and they did it - primarily - by being first to market. The Silicon Valley, startup, "eat-the-world" culture created by that mindset has filtered out into every company that writes software in any competitive market. Naturally, this exclusively rewards myopia. There is no future value, there's only what you can ship out to customers tomorrow.

Of course the irony here is that the billionaire VC leeches pushing this mindset are making money on the future - but not too far into the future that everything crashes down under its own weight. Just far enough they can sell their stake and get rich.

Software Myopia Causes All The Rest of Our Problems

Education and production will follow the financial incentive. If every company is looking for the most rapid returns, they will choose the tooling that helps them get there. That's Virtual DOM tools like React - for now at least. The tooling is obviously and objectively inferior to a lot of other options, but its one redeeming characteristic is that you can get things done insanely fast. Maintainability be damned.

If every company is looking for React developers, that's what bootcamps will train, because that's what students want. And of course, the high hire-rate after graduation is a great statistic, too.

If the entire field around you looks like it's all framework development, there's no financial incentive to actually learn your programming language. After all, no one is hiring JavaScript developers, they're hiring React developers (and Vue developers, and...).


The technical debt accrued along this set of predefined rails (another tool in this exact vein, but it's backend) is absolutely massive. Any product of any size eventually collapses under its own weight. I've seen it happen - always in slow motion - at two separate companies. No surprise both are VC-funded.

This hell-cycle and the - obviously apparent - outcome are why people hate JavaScript and constantly build tools to replace it. It's also why companies hire more and more developers to throw at their software (who - of course - do the exact same thing, compounding the problem).

It's a Human Problem

The solution is undeniably simple: Value long-term quality over short-term profit.

I appreciate how utterly insane I sound, which is - of course - the problem. We can't change the financial incentives without changing the structure of our country's financial system.

Now, we could also fix this from the bottom up, too. All it would take would be every developer refusing to reach for the easy, fast solution and demanding to do things the right way.

By the nature of the situation in which we find ourselves, this is equally insane. Developers have constructed stories they tell about themselves: as capable, valuable individuals making a difference... "changing the world." Admitting that the entire basis for that identity was a lie is personally devastating.

"I have sacrificed both my craft and the product of my effort at the altar of faster deployments so that somebody above me can make a buck a little faster. I am essentially a machine that a capitalist can extract dollars from and then discard." is a real tough pill to swallow.

https://log.rdl.ph/post/capitalism-has-almost-finished-destroying-software.html (Fri, 14 Aug 2020 14:27:00 GMT)
Fluxbus 2: 2 Flux 2 Furious

This again.

I've talked about Fluxbus in the past. If you haven't read about Fluxbus, I (not very) humbly suggest it's the application architecture you need but have never heard of.

Astute readers might notice that I bemoan one of the worst things about Flux:

Every single listener is called for every single change

[…]

Since the subscription pattern in Redux is store.subscribe( someCallback ), there is no way to filter how many functions are run when the state updates.
Imagine a moderately complex application that has a state value like { messages: { loading: true } } and a few hundred subscribers.
How many of those subscribers care that the messages stop loading at some point? Probably 1. Wherever your chat module is, it's probably the only place that cares when the messages stop loading and it can hide a spinner.
But in this subscription paradigm, all of your few hundred subscribers are called, and the logic is checked to determine if they need to update. The inefficiency is mind-boggling.

Again, Flux doesn't quite have the exact same problem: not only because you can have multiple stores for specific intents, but also because it doesn't prescribe any particular subscription pattern.
You could implement a pre-call filter that allows subscribers to only be called for particular actions that they care about, but this is so far off the beaten path it's never been seen by human eyes before.

And then I go on to suggest the alternative - Fluxbus - which has an implementation like this:


// `bus` here is an RxJS Subject; the full setup appears later in this post
function publish( message ){
	return bus.next( message );
}

function subscribe( map ){
	// Returns an unsubscriber
	return bus.subscribe( ( message ) => {
		if( map[ message.name ] ){
			map[ message.name ]( message );
		}
	} );
}

This is just wrapping the native Rx methods with a simple helper (in the case of publish) or our filter magic (in the case of subscribe). The subscriber takes a map of functions keyed on a message name. If that message name is seen, that subscriber is triggered.

Ladies and gentlefolk, this is what's called a Minimum Viable Product.

Explain yourself.

The problem here is that in a worst case scenario, this is exactly as bad as the Flux pattern where it calls every subscriber for every change.
Granted, that's the worst case behavior, compared to Flux's intended average behavior, but it's still not great.

Let me explain what I mean by worst case.
Let's say you have 1000 bus messages that you'd like to subscribe to. In a normal application, you might subscribe to 5 in one component, and 10 in another, and so on. This might result in close to 200 different subscribers.
As a refresher, you might subscribe like this:

subscribe( {
	"SOMETHING_I_CARE_ABOUT": ( message ) => { ... },
	"SOMETHING_ELSE": ( message ) => { ... },
	"ANOTHER_THING": ( message ) => { ... },
	"ONE_MORE_THING": ( message ) => { ... },
} );

In practice, you've added one subscriber, but it will hand off control to any of those four handlers when the bus receives any of those four events.
Pretty decent, but the outer call is still dumb. There's no filtering at the top level, so every time you call subscribe, that's another outer subscriber that will get called on every message through the bus.

Again, I want to reiterate: this is significantly better than binding into every component and calling every handler on any message (or any data change, in Flux terms). Once a function like this is called, the Message Bus will determine that none of the message names match, and it won't call the handlers.

At this point, I'm sure you can imagine the worst case scenario. I'll rewrite the above:

subscribe( {
	"SOMETHING_I_CARE_ABOUT": ( message ) => { ... }
} );

subscribe( {
	"SOMETHING_ELSE": ( message ) => { ... }
} );

subscribe( {
	"ANOTHER_THING": ( message ) => { ... }
} );

subscribe( {
	"ONE_MORE_THING": ( message ) => { ... }
} );

Now we're close to Flux territory. While we're not running expressions inside component bindings to check whether the update is relevant to us or not, we're still calling every outer subscriber for every single message sent through the bus.
Reiterating one last time: even the worst case scenario here avoids unnecessary component cycles by never calling into the handlers if the message doesn't match.
Fluxbus with both hands tied behind its back still beats Flux on a normal day. But it's not ideal.

Fix it.

The key here is that Fluxbus was just the absolute thinnest of wrappers around an RxJS Subject. It's barely even there. Here's the code again:

function publish( message ){
	return bus.next( message );
}

function subscribe( map ){
	// Returns an unsubscriber
	return bus.subscribe( ( message ) => {
		if( map[ message.name ] ){
			map[ message.name ]( message );
		}
	} );
}

This is the absolute minimum extra code possible to implement subscriptions-by-message-name.

We can do a lot better by being just a little more clever.

The basic premise is that a Fluxbus instance can track handlers separately from handling inbound messages. So when a consumer calls subscribe, the internals of Fluxbus turn the message-name-to-handler mapping into a message-name-to-identifier mapping. Then it pushes the handler into a dictionary of all handlers and it's done.
Of course, that's not quite the whole picture - you need to reasonably handle unsubscriptions, too, and there are other little optimizations you can do along the way.
For the sake of our brains here, I'll present the MVP for this - sans abstractions and optimizations.

It gives us the code or it gets the hose again.

import { Subject } from "rxjs";
import { v4 as uuidv4 } from "uuid"; // assuming the `uuid` package for IDs

var bus = new Subject();
var handlers = {};
var listeners = {};

bus.subscribe( ( message ) => {
	( listeners[ message.name ] || [] )
		.map( ( id ) => handlers[ id ] )
		.forEach( ( handler ) => {
			handler( message );
		} );
} );

export function publish( message ){
	bus.next( message );
}

export function subscribe( messageMap ){
	var names = Object.keys( messageMap );
	var callbacks = Object.values( messageMap );
	var idMap = [];

	callbacks.forEach( ( cb, i ) => {
		let name = names[ i ];
		let id = uuidv4();

		handlers[ id ] = cb;

		if( listeners[ name ] ){
			listeners[ name ].push( id );
		}
		else{
			listeners[ name ] = [ id ];
		}

		idMap.push( { name, id } );
	} );

	return () => {
		idMap.forEach( ( { name, id } ) => {
			delete handlers[ id ];

			if( listeners[ name ] ){
				listeners[ name ] = listeners[ name ].filter( ( listener ) => listener != id );

				// Only check the length if the name still has a listener list;
				// this also makes calling the unsubscriber twice safe
				if( listeners[ name ].length == 0 ){
					delete listeners[ name ];
				}
			}
		} );
	};
}

My eyes.

You said you'd give me the hose if I didn't give you the code.

What's happening here?

Let's break it down.

var bus = new Subject();
var handlers = {};
var listeners = {};

bus.subscribe( ( message ) => {
	( listeners[ message.name ] || [] )
		.map( ( id ) => handlers[ id ] )
		.forEach( ( handler ) => {
			handler( message );
		} );
} );

In the first section, we set up an RxJS Subject as our bus, just like before. We also define two variables that are going to cause all of this to be "singletonish" which is a very technical term for "singleton-y" things. It's a closure, basically.
Then, we immediately subscribe to the bus. This is the only bus subscriber. It will determine which handlers to trigger when a message comes in.
Roughly, this subscriber goes something like this:

  1. Give me all the IDs of listeners for this message name
  2. Convert all those IDs into their real handler functions
  3. Call each handler function with the message

export function publish( message ){
	bus.next( message );
}

Then we have our normal publish function.

boring 🥱

export function subscribe( messageMap ){
	var names = Object.keys( messageMap );
	var callbacks = Object.values( messageMap );
	var idMap = [];

	...
}

Then, we get to the real magic: subscribe.
subscribe still accepts a dictionary that maps a message name to a handler function to be called. But now, we immediately split that map into the name keys we're listening for and the handler values to be called. It's going to be very convenient later to have these as discrete arrays. We need a way to keep track of the mappings we're going to create here, so we also initialize an empty array to store mappings.

export function subscribe( messageMap ){
	...

	callbacks.forEach( ( cb, i ) => {
		let name = names[ i ];
		let id = uuidv4();

		handlers[ id ] = cb;

		if( listeners[ name ] ){
			listeners[ name ].push( id );
		}
		else{
			listeners[ name ] = [ id ];
		}

		idMap.push( { name, id } );
	} );

	...
}

The next chunk begins by iterating over each of our handler functions and grabbing the message name associated with it and generating a new unique ID for that handler.

Say there fella, couldn't this be
Object
	.entries( messageMap )
	.forEach( ( [ name, cb ] ) => { ... } );
to avoid two extra variables and two iterations over the object to extract them?
……………………………… stop yelling at me i'm the internet's sweetie pie

Once we have a unique ID we immediately store our handler in a dictionary associated to it.

Then, we check our global listeners object for the message name. If we're already listening to that message name, we push our handler ID into the list. If we're not, we just create a new list.

To keep track of all our mappings in this subscriber, we push an object into the list of ids that includes both the message name and the handler ID.

export function subscribe( messageMap ){
	...

	return () => {
		idMap.forEach( ( { name, id } ) => {
			delete handlers[ id ];

			if( listeners[ name ] ){
				listeners[ name ] = listeners[ name ].filter( ( listener ) => listener != id );

				// Only check the length if the name still has a listener list;
				// this also makes calling the unsubscriber twice safe
				if( listeners[ name ].length == 0 ){
					delete listeners[ name ];
				}
			}
		} );
	};
}

Finally, we return an unsubscribe function. When a consumer calls subscribe with a map of message names and their handlers, they will expect to receive an unsubscriber that unsubscribes all of their handlers at once just like they passed in a single dictionary.

So we take our handy array of { "name": message.name, "id": handlerId } mappings and we loop over the whole thing.

We delete the handler associated with the ID we're iterating over. Then, we remove the listener ID from the list of all the listeners for that message name.
As a last cleanup check, if that message name no longer has any associated listeners, we just delete it from the dictionary of message names that we check in the bus subscriber.

Do math.

If we let n be the total number of calls to subscribe and x be the total number of subscriptions for a given message (like "SOME_EVENT": ( message ) => { ... }), then...

The previous version of Fluxbus was O(n + x) for every published message.
That is: if your application calls subscribe 100 times, and in just one, single instance of those calls you subscribe to SOME_EVENT, there will be 101 function calls when a single SOME_EVENT is published into the bus.

This new version of Fluxbus is O(x + 1) for every published message.
That is: if your application calls subscribe 100 times, and in just one, single instance of those calls you subscribe to SOME_EVENT, there will be 2 function calls when a single SOME_EVENT is published into the bus.

Done?

Of course not.

The immediately obvious next step is improving this code. You can see such an improvement in the library I built to reduce my own application boilerplate. I've very cleverly named the implementation MessageBus.

Of course then there's syntactical stuff like what you were yelling at me about up above.
Code is never finished.

What do you think about Fluxbus 2?

https://log.rdl.ph/post/fluxbus-2-2-flux-2-furious.html (Fri, 31 Jul 2020 21:30:00 GMT)
Remote Work Q&A | Jamstack Denver

Thanks!

Thank you to all the folks who showed up to talk about remote work and ask us questions!

Wrap-Up

At previous remote work events, I didn't have a single place for after-the-fact answers, somewhere to collect all the resources I might have mentioned, and more.
I want to make sure there is a single place to collect everything, so I'm collecting everything here.

Video

Before the Q&A session, Mike Duke gave a great talk about his experience shipping his first production site, so for more insight into being a dev, I highly recommend watching his talk, too.
The majority of the event was recorded, so you can find that video below.

Resources

I find the resources below helpful for folks interested in learning more about remote work.

Other Questions

Do you have questions for me?
Send me an email!

https://log.rdl.ph/post/remote-work-q-a-jamstack-denver.html (Sat, 02 May 2020 20:09:00 GMT)
Remote Work AMA | DVLP DNVR

Thanks!

I'm putting the thanks section first, because without the great organizers and questions from the attendees, this AMA event could have been a real bust!

So: thank you to the DVLP DNVR team and all the folks who showed up to talk about remote work and ask me questions!

I think I had some pretty good answers to some of the questions, and some maybe okay(ish) answers to others!

Wrap-Up

During and after the event, I realized I had made a mistake by not having resources about remote work readily available somewhere for everyone to see!

After the event, I got some more questions via Slack (Denver Devs) and my email (rdl.ph).
I want to make sure there's a single place to collect everything, so I'm collecting everything here.

Video

The majority of the event was recorded, so you can find that video below.
The recording begins in the middle of a question asking what Jamstack is and if it's just for sites without servers (it was a nuanced question, so that's not exactly a good paraphrase!).

Resources

I mentioned a few links and other resources, so I've collected those below.

Other Questions

Some folks have asked me more questions after the fact, so I thought I'd collect those questions and my answers below.

  1. Advice for someone looking for a remote position as their first job?
  2. What are you looking for when you're evaluating remote candidates?
  3. How do I grow as a developer?
  4. What do you think about Svelte?

From Tobie Tsuzuki:

What advice do you have looking for remote jobs as your first technical role?

That's a tough one!
To be clear, my first job wasn't remote, so I can't give you any insight from that side of the picture.
However, I've hired people (remote & otherwise) and I've gotten jobs remote after my first, so here's what I think:

  1. First of all, be excited about the company.

    I admit that's something I can say from a place of privilege: if you don't have a job, anything is better than nothing - excitement aside!

    But it really helps you stand out when you have lots of experience with the company, like their product, use their product, have ideas of how to improve their product, etc.
    "Passion" is an overused and under-defined search criteria, but if I were to use "passion" as a hiring criteria, this is what I want.
    Do you want to work for this company on our product, or are you excited about something else?
    Being excited about the company will make you more successful no matter what the tech is, whether or not it's remote, etc.

  2. Practice/exercise and demonstrate the skills required for remote work.

    The skills for remote aren't really all that different than the skills for non-remote work, but often during hiring the onus is on you - the candidate - to show that you can exercise those skills.

    Sadly this isn't as often the case when hiring for "local" employees, but it should be.

    Those skills are (not exclusively!):

    • clear written communication,
    • excellent time management,
    • excellent self-motivation,
    • low ceiling for getting stuck (e.g. if you get stuck you immediately seek help),
    • the ability to self-direct (e.g. if you encounter a problem or lack of things to do you know how to find solutions or create your own work)

    Basically, if you can think of a skill you should have as a professional engineer, being able to demonstrate those skills (ideally with evidence!) is crucial to laying an initial foundation of trust for remote work.

  3. If you can, frame as much as possible as how remote work benefits the company!

    It's fine to say "remote work is better for me because my sleep schedule doesn't fit the average work time period."

    However, some companies might interpret that as: "I'm going to slack off all day and party all night."

    A better phrasing might be something like: "I think remote work is a great way to run a business because it means I can work in all the time I'm most productive throughout the day!"
    Now it's phrased such that the business sees how they're benefitting from you working remote!


And a followup:

When you are looking for 'passion', do you expect the candidate to have made projects or to demonstrate a working understanding of the product, or is a well-crafted cover letter enough to get your foot in the door?

Yes!

A well-crafted cover letter is enough to get your foot in the door, but only that.

I would say that for any job interview, remote or not, you should know as much as you possibly can about the product.
I've used GitLab for many years, but before I interviewed, I read all of their product marketing, dove into the handbook and read up on policies and engineering structure, read guidance docs for the way they run reviews and code styleguides, etc.

I think if you want a job that necessarily means you should have at least a working understanding of their product, but probably more than that.

As for projects, that's kind of a tough one.

For someone coming to me for their first technical role, probably looking for a junior level position, I'd be less interested in their projects.
I know those projects probably wouldn't apply directly to my company in all but a small handful of cases, so I wouldn't be judging too harshly on those projects.
I would look for the following, however:

  • Has the candidate created projects outside of their school/bootcamp/training course?

    We all had to do schoolwork; did the candidate want to do more?

  • Has the candidate demonstrated proficiency in what they're applying for?

    Let's say you're applying for a front end position.
    If all of your projects are Python and Rails, I'm going to raise an eyebrow and wonder if you're going to succeed.
    I'll still give you a shot!
    But, if you can't show me in the interview that you know JavaScript pretty well, I'll probably take away two things:

    1. you're not a good fit and
    2. you don't know the difference between front end and back end!
  • Does the candidate have any specific things they especially like?

    As a junior / first time tech role, it's totally fine to not know specifics about where you want your career to go.
    But, if you can say: "I love the nuts and bolts of reactive data flow, and I've recently gotten really interested in Observable streams" that shows me that you've explored the depths of your field (front end in this case, it's just an example).

    Critical thought and "digging in" to tech is important to advancing, so showing those things early means I'm more likely to take a risk on you.


Following up again:

I really like the idea of digging into tech. What's your approach for furthering your understanding of certain techs?
So far, since I am more junior and I don't know where I am headed, I have just been trying to learn any new thing that seems interesting, but after trying to learn multiple frontend frameworks it seems I have no deep-rooted knowledge of any of them.

☝🏻 This is absolutely the Achilles' heel of front-end frameworks.
They abstract so much away and often invent their own syntax (thus requiring compilers and transpilers) that it's often impossible for new learners to understand what's actually happening under the hood!

I would say the best way to start digging in is identify something that you think is important or cool, and then start searching for ways to write your own of that!

I know it sounds crazy, but it's the best way to really get into the fundamentals.

So here's an example.
Let's say you use React all the time.
You have learned or self-identified that the way React renders new things to the screen is really cool.
It's just so convenient for you as a developer to spit out the same HTML from your render function and then React magically updates the DOM for you... so good!

Okay! But how does it work!?
Build one yourself!
You don't need all of React, just build a static HTML page with some JavaScript included, and every time you call a function with a data object, the JavaScript should update the real page with the correct new HTML.
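A first pass might be as blunt as this (a sketch - it just throws the old DOM away on every call):

// The worst possible "render": rebuild all of the markup every time.
// Assumes the page contains an element with id="app".
function render( data ){
	document.querySelector( "#app" ).innerHTML = `
		<h1>${ data.title }</h1>
		<p>Unread messages: ${ data.unread }</p>
	`;
}

render( { title: "Inbox", unread: 3 } );
render( { title: "Inbox", unread: 2 } ); // recreates everything, even the unchanged title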

Should be moderately simple, right?
So it sounds like you're going to need to learn what React is doing (Virtual DOM <-> Real DOM diffing), which means you'll need to learn about virtual DOMs, which means you'll need to learn about document fragments in memory.
Then you'll need to learn how React chooses what to update (what counts as having changed?), and in fact how it maintains hooks into the DOM so it can update appropriately.

Start piecing these things together.
Your first pass can just be the worst thing ever, it's fine.
As you learn more and more about this really specific topic, it'll improve.

Most importantly, as you learn more, you'll identify either that this still intrigues you, or that it doesn't.
If it doesn't, you can start over, pick something new!

Maybe.... How do CSS transitions and transforms work?
Is CSS an animation library? Surely not? ..... hmm something to research.

Finally, I find the MDN Web API page incredibly illuminating. It shows me all the things I don't know.
I would recommend you take a look at the Interfaces section and see if anything jumps out at you.
I think bare minimum learning would be to understand all of the things that start with HTML and CSS.

I look at that list and I really want to dig into, for example, Speech* and Sensor* stuff.
Any of the RTC* (Real Time Communication) stuff also looks really cool.


Then Tobie asked a question about another meetup I run - Jamstack Denver:

I saw the JAMStack meetup and thought Svelte looks pretty smooth, but I'm not sure if it will gain any traction. What were your impressions of it?

Well... HMM 😅

I think it's a cool idea.
PROS:

  • I really like that it outputs raw JS (and a small amount of it) that runs in the browser.
    It doesn't ship Svelte to the browser, it just builds bare little functions that do the right thing.
    That's cool.
  • I like that - for the most part - it's JUST JS, HTML, and CSS.

CONS:

  • I've really, really burned out after 7+ years of compilers and transpilers and package management and plugins and configs.
    I don't like that Svelte literally cannot run in a web browser unless you compile it (or JIT compile it, like all the other frameworks)
  • Part of the reason for that is: I really dislike that Svelte has hijacked a valid piece of JS (the label) to indicate data binding (e.g. $: [some expression])

Personal stuff:

  • I used Ractive.js in the past and I loved it at the time!

    The guy who made Svelte (Rich Harris) also made Ractive.js, and Svelte feels like an incremental step forward - he essentially moved Ractive.js into a JS compiler instead of a frontend framework.

  • I stopped using Ractive.js because every frontend framework has been made obsolete by hyperHTML-style DOM libraries (like lit-html) and the platform-native component model baked into web browsers.
  • REDACTED 😅
    Tobie asked me what I thought, so I gave them my personal opinions!

I don't really think it's going to catch on.
Sadly, I think VDOM-based tools established themselves early, and it's hard to pry people away from that.

That said, VDOM is embarrassingly inefficient, so I hope that tools like lit-html and Svelte (not itself hyperHTML-like, but it's better than VDOM) eventually win out.

https://log.rdl.ph/post/remote-work-ama-dvlp-dnvr.html (Fri, 03 Apr 2020 23:53:00 GMT)
The Secret Feed (Warning: Boring)
You're viewing a secret post! Read more about RSS Club.

You've Been Inducted

Since I now have a secret feed just for RSS subscribers (welcome!), I figured I should put something in it.
If you're not sure what I'm talking about - well, first of all, how did you get here!? More importantly, check out RSS Club.

RSS? 🤢🤮

To be clear, I don't feel that way about RSS, but I imagine some people do.
So why RSS?

I think social media is an unmitigated negative drain on society. Yes, there are bright spots, and for certain underrepresented groups of people, it can be a positive. But, in terms of the larger societal impact that social media has had, the negatives are... well... unmitigated.

That's not even to speak of the centralization and control that companies exert over content. Even ostensibly free content platforms like Medium began paywalling free content written on their platform for free by unpaid individuals who just wanted a place to share their thoughts and ideas.

Ultimately my point is that the less control we have over our content, the more likely it is to be taken away from us.

I know that running an RSS feed will not be for everyone. Maybe it's even too late to bring RSS back from the (almost) dead. My Inoreader subscriptions say otherwise, but perhaps it's a faux glut of content and it's going the way of the dodo. In any case, this web - one where the content is available on my terms, in a web browser for free, and in an RSS feed, delivered to readers for free, without restrictions - is the web I want to use.

If you want to join those of us who want a freer, simpler web here's how to participate (although I should note that - because this web is freer and simpler - I literally cannot make you do anything you don't want to do):

  1. Don’t Talk About RSS Club
  2. Don’t Share on Social Media
  3. Provide Value

Don’t talk about it. Let people find it. Make it worthwhile.
These rules are completely unenforceable, and that's the way I like it.

How's It Done!?

Like Dave, I'll provide the few details I have about how I made this work.

I use Contentful to store my content and a custom front end to generate the posts, so it was initially as easy as adding a field to my post types.

The new RSS Exclusive field showing a Y/N indicator and 'boolean' type
Pretty easy to set up. The only concern would be maxing out the number of fields for a content type, but the free account on Contentful has fairly generous limits.

Once the field is set up, it's just a matter of telling Contentful that a piece of content is RSS Exclusive.

The RSS Exclusive field in use on a piece of content, showing a set of radio buttons - 'yes' and 'no' and a 'clear' link. 'yes' is selected.
This post is marked as RSS Exclusive

At that point, the content is fetched from the Contentful API with the field included, so I just have to remove it from the main index.

import { sortReverseChronologically, isRssExclusive } from "../../common/Post.js";

// The get*ForPromotion helpers (defined elsewhere) fetch each post type
function getPosts( promotion = "production" ){
	return sortReverseChronologically( [
		...getLongForPromotion( promotion ),
		...getShortForPromotion( promotion ),
		...getImageShortForPromotion( promotion )
	] ).filter( ( post ) => !isRssExclusive( post ) );
}

// common/Post.js
export function isRssExclusive( post ){
	return post.fields.rssExclusive || false;
}

Then, I just add a little header to the posts so the body itself has some descriptive content that I don't need to copy/paste to every post. I'll just show one here since all the post templates use the same code.

${post.fields.rssExclusive ? html`
		<code>
			<x-icon icon="rss"></x-icon> You're viewing a secret post! <a href="/rss-club">Read more about RSS Club</a>.
		</code>
		<br />
		` : ""}
	${unsafeHTML( post.fields.content )}

The same header is placed in front of the RSS item so it shows up right at the beginning of the RSS content, too.
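
For the curious, the feed side is roughly the sketch below; renderRssDescription and the post shape here are illustrative, not my exact feed code:

function renderRssDescription( post ){
	// Same conditional header as the HTML templates above
	const header = post.fields.rssExclusive
		? `<code>You're viewing a secret post! <a href="https://log.rdl.ph/rss-club">Read more about RSS Club</a>.</code><br />`
		: "";

	return header + post.fields.content;
}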

Of course, I don't want to create another silo and break the web, so all of these posts just have regular URLs. Maybe that means this isn't really a truly secret RSS Club, but URLs are more important than our special club; that's sort of the whole point of this.

Welcome to RSS Club

So there it is: my first RSS Exclusive post.

Welcome to a simpler, freer web!

]]>
https://log.rdl.ph/post/the-secret-feed-warning-boring-.html73hI7P352hs34dosJ5vUBNThu, 02 Apr 2020 19:05:00 GMT
<![CDATA[Protect Your Users with HMAC Login Authentication]]>

Logins Send Passwords

"Huuu durrr" you say, "that's the point of a login."

I disagree!

The point of a login is to authenticate a person using the application. Sending their password to the backend is not necessarily part of that process.

There are essentially four vulnerable parts of an authentication system:

  1. Creating the account (all the sensitive data can be intercepted)
  2. Storing the credentials (they can be stolen)
  3. Logging in (it can be replayed or stolen by a man-in-the-middle)
  4. The human (they can re-use their password other places, etc.)

Of these, we can do pretty well in protecting against #2 and #3. We can't do much against #1 without drastically changing the typical sign-in flow. We also can't do much about most of the human factors that are out there, but we can protect our users against their password being stolen from us and stuffed into other websites, if they re-use passwords.

Of course, ideally, our database would never be stolen and our users would never have their networks compromised (leading to their credentials being stolen). "Ideally" is the wrong way to build an authentication system, though, so we're going to look at a way to deal with the non-ideal.

Granted, we will be using the most common method of sign-in (username plus password) and the most common method of storage (username plus hash). This is intentional.
We can get significant security gains by changing some of the magic between the user entering their information and the API authenticating them without making significant architectural changes.

Of course, you can make significant architectural changes for even better security, but that's a topic for another time.

HMAC is Better

"But how?"

Thanks for asking.

HMAC stands for Hash-based Message Authentication Code.

The basic premise is this: If I (the API) know a secret key that the user can continually regenerate using a secret passphrase, then we can both hash the same message and come up with the same result.
If we both come up with the same result without sharing the secret key (after the first time, more on that later), then I (the API) can theoretically assume that the user is the same person who originally created the account, because they've successfully generated the same secret key.

Once more, this time with (paid) actors.

Alice created an account on our website.
Our API - Bianca - has the secret key from when Alice created her account: EGG.
Alice knows the secret passphrase that will recreate the secret key: PAN.
When Alice visits the login form, she enters her username and her secret passphrase: PAN.
In her browser, our application uses a hash to recreate EGG. It then uses EGG to hash some data, like her username, into a third secret: SCRAMBLE.
The browser then sends Bianca the two pieces of data: the username and SCRAMBLE.
Bianca knows the same secret key already, so she does the same computation: EGG plus Alice's username results in SCRAMBLE. Since Bianca has come up with the same result as Alice asserted, Bianca assumes that Alice is who she says she is, and logs Alice in.
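
If code is clearer than prose, here's the whole exchange as a minimal Node sketch (the browser-side equivalents, built on SubtleCrypto, come later in this post):

import { createHash, createHmac } from "crypto";

// Alice recreates her secret key from her username and passphrase...
const EGG = createHash( "sha512" ).update( "alice" + "PAN" ).digest( "hex" );

// ...then uses it to produce an authentication code over her username
const SCRAMBLE = createHmac( "sha512", EGG ).update( "alice" ).digest( "hex" );

// Bianca stored EGG at signup, so she can compute the very same code
const expected = createHmac( "sha512", EGG ).update( "alice" ).digest( "hex" );

console.log( expected === SCRAMBLE ); // true: Alice is authenticated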

For clarity (and the more visual among us), here's a sequence diagram that might help.

sequenceDiagram
	autonumber
	participant FC as Frontend Crypto
	participant A as Alice
	participant B as Bianca
	participant BC as Backend Crypto
	Note over A: Alice enters her username (alice)<br/>and passphrase (PAN) into the site login form.
	Note over A: Alice clicks the Login submit button
	rect rgba( 0, 0, 255, 0.3 )
		A ->> FC: "alice", "PAN"
		Note over FC: hash( "alice" + "PAN" ) = "EGG"
		FC ->> A: "EGG"
		rect rgba( 255, 0, 0, 0.25 )
			A ->> FC: "EGG", "alice"
			Note over FC: hmac( "EGG", "alice" ) = "SCRAMBLE"
			FC ->> A: "SCRAMBLE"
		end
	end
	rect rgba( 0, 255, 0, 0.25 )
		Note over A: message = { "username": "alice", "hmac": "SCRAMBLE" }
		A -->> B: message
	end
	Note over A,B: Alice's secret passphrase, "PAN",<br/>never goes across the network!
	Note over B: Bianca already knows the secret "EGG"<br/>from when Alice signed up...
	rect rgba( 0, 0, 255, 0.3 )
		rect rgba( 255, 0, 0, 0.25 )
			B ->> BC: "EGG", "alice"
			Note over BC: hmac( "EGG", "alice" ) = "SCRAMBLE"
			BC ->> B: "SCRAMBLE"
		end
	end
	Note over B: hmac "SCRAMBLE" equals hmac in message,<br/>therefore username ("alice") in message is authenticated
	B -->> A: "alice" authenticated!

So the four parts of HMAC:

  • Hash-based: Alice and Bianca both perform a hash of some data (numbers 3 & 6 in the diagram)
  • Message: Alice (via our login form) sends a message asserting who she is to Bianca (number 5 in the diagram)
  • Authentication: Since the passphrase and key are not transferred, Bianca believes the authenticity of Alice's message if both Alice and Bianca come up with the same hash (number 8 in the diagram)
  • Code: SCRAMBLE is the code both Alice and Bianca generate. (numbers 4 & 7 in the diagram)

Interceptions: 0

Note that only Alice knows the secret passphrase PAN. It never leaves her browser.
This is how we protect Alice from having her password stolen by man-in-the-middle attacks. Even if she's on a compromised network, Alice's password never leaves her browser. If we wanted the additional security of a salt, we could change our signup and login flows to first submit the username, which would return the salt, and then the signup or login would proceed normally but using the salt to enhance the entropy of whatever passphrase is used.

We want to leave the login flow alone as much as possible, though, so we'll use Alice's username as her salt. It's not a perfect salt, but it's better than nothing. If Alice's password is password (shame on you, Alice), and her username is alice1987, at least our combined secret passphrase will be alice1987password (or passwordalice1987) instead of just password. It's an improvement, even if minimal.

Message Authentication 😍

We have one more thing we can do to protect Alice.

If someone is snooping on Alice's network while she's logging in - maybe from unsecured coffee shop wifi - they could grab her login request (remember, it's alice1987 and the authentication code SCRAMBLE).
Once someone has that info they could do anything with it. They could try to reverse the hash algorithm to figure out her secret passphrase, or they could re-use the same request to make Bianca think that Alice is trying to log in again.

The only thing we can do about the former is regularly audit our hash algorithms to make sure we're using best-in-class options so that figuring out the secret phrase will take longer than it's worth.
We can - however - protect against the latter attack, called a replay attack.

To protect against replay attacks, all we have to do is add a timestamp to the message! When Bianca checks the authentication request, she'll also check that the timestamp isn't too old - maybe a maximum of 30 seconds ago. If the request is too old, we'll consider that a replay attack and just deny the login.

The beauty of message authentication is that as long as our front end sends all of the information and it includes that information in the HMAC (SCRAMBLE), the API (Bianca) can always validate the information.

Show Me How Now, Brown Cow

The amazing thing about this is it just uses off-the-shelf tools baked into virtually every programming language. Even web browsers provide (some of) these tools by default as far back as IE11!
Sadly, one crucial piece - the TextEncoder - is not supported in Internet Explorer at all, so if you need to support IE, some polyfilling is necessary. Fortunately, everything here is polyfillable!

To get us started, we'll build a couple of helper functions to do basic cryptographic work.

function hex( buffer ){
	var hexCodes = [];
	var view = new DataView( buffer );

	// Walk the buffer 32 bits (4 bytes) at a time...
	for( let i = 0; i < view.byteLength; i += 4 ){
		// ...zero-padding each chunk to exactly 8 hex characters
		hexCodes.push( `00000000${view.getUint32( i ).toString( 16 )}`.slice( -8 ) );
	}

	return hexCodes.join( "" );
}

export function hash( str, algo = "SHA-512" ){
	var buffer = new TextEncoder( "utf-8" ).encode( str );

	return crypto
		.subtle
		.digest( algo, buffer )
		.then( hex );
}

These two functions take a stream of data and convert it to hexadecimal (hex) and take a string and a hashing algorithm to create a hash (hash).
Note that SubtleCrypto only supports a small subset of the possible algorithms for digesting strings. For our use-case, however, SHA-512 is great, so we'll default to that.

Note that hex is synchronous, but hash returns a Promise that eventually resolves to the hexadecimal value.
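
As a quick sanity check, calling it looks like this (the values are just our running Alice example):

hash( "alice" + "PAN" ).then( ( secret ) => {
	// SHA-512 digests are 64 bytes, so expect 128 hex characters
	console.log( secret.length ); // 128
} );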

Next up, we need a way to generate a special key that will be used to create the authentication code.

function makeKey( secret, algo = "SHA-512", usages = [ "sign" ] ){
	var buffer = new TextEncoder( "utf-8" ).encode( secret );

	return crypto
		.subtle
		.importKey(
			"raw",
			buffer,
			{
				"name": "HMAC",
				"hash": { "name": algo }
			},
			false,
			usages
		);
}

Here, makeKey takes Alice's secret passphrase, some algorithm (SHA-512 again, in our case), and a list of ways the key is allowed to be used.
For our intent, we only need to be able to sign messages with this key, so our usages can just stay [ "sign" ].

What we get back is a Promise that eventually resolves to a CryptoKey.

Finally we need a way to generate signed HMAC messages.

export async function hmac( secret, message, algo = "SHA-512" ){
	var buffer = new TextEncoder( "utf-8" ).encode( message );
	var key = await makeKey( secret, algo );

	return crypto
		.subtle
		.sign(
			"HMAC",
			key,
			buffer
		)
		.then( hex );
}

Here, hmac takes Alice's secret passphrase, and our message that we want to authenticate (plus our hash algorithm).
What we get back is a Promise that eventually resolves to a long hexadecimal string. That hex string is the authentication code!

When our API (Bianca) and our front end agree on the mechanism for creating messages to be authenticated, as long as the result from hmac on both sides agrees, the message has been authenticated!
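
Putting the helpers together, one round trip of the login math might look like this (the values mirror the Alice example; run it in a module or console that supports top-level await):

const secret = await hash( "alice" + "PAN" );              // Alice's "EGG"
const message = `alice${Date.now()}`;                      // username + timestamp
const authenticationCode = await hmac( secret, message );  // Alice's "SCRAMBLE"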

Sign Up & Log In

So let's say all that code above is off in a file called Crypto.js.
Here's a file with two functions in it that create a new account for a user, and log a user in.

import { hash, hmac } from "./Crypto.js";

async function createAccount( username, passphrase ){
	var secret = await hash( `${username}${passphrase}` );

	return fetch( "/signup", {
		"method": "POST",
		"body": JSON.stringify( {
			secret,
			username
		} )
	} );
}

async function login( username, passphrase ){
	var now = new Date();
	var secret = await hash( `${username}${passphrase}` );
	var authenticationCode = await hmac( secret, `${username}${now.valueOf()}` );

	return fetch( "/login", {
		"method": "POST",
		"body": JSON.stringify( {
			"hmac": authenticationCode,
			"timestamp": now.toISOString(),
			username
		} )
	} );
}

Here's a sequence diagram for signing up.

sequenceDiagram
	autonumber
	participant FC as Frontend Crypto
	participant A as Alice
	participant B as Bianca
	Note over A: Alice enters her username (alice)<br/>and passphrase (PAN) into the site create account form.
	Note over A: Alice clicks the Account Create button
	rect rgba( 0, 0, 255, 0.3 )
		A ->> FC: "alice", "PAN"
		Note over FC: hash( "alice" + "PAN" ) = "EGG"
		FC ->> A: "EGG"
	end
	rect rgba( 0, 255, 0, 0.25 )
		Note over A: signup = { "username": "alice", "secret": "EGG" }
		A -->> B: signup
	end
	Note over B: Bianca stores "EGG" in the table "authentications"<br/>for the username "alice"
	B -->> A: "alice" account created!

The sequence diagram for logging in is identical to the one at the beginning of this post.

What About The API?

Good question.
It does the exact same work you've always done, just using HMAC now.

In the case of account creation, nothing has changed. It just so happens the secret we receive from the user is a bit longer (and more random) than usual. Store it safely in your authentications database table, associated with the username.
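
If it helps, the signup handler can be as boring as this sketch; like the login pseudo-code below, the db abstraction here is assumed, not real:

// Rough pseudo-code, same caveats as the login handler below
function signup( request, response ){
	var { username, secret } = request.body;

	// storeSecretKeyForUser is an assumed abstraction, use your favorite!
	db.Authentications.storeSecretKeyForUser( username, secret );

	return true;
}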

For logins, your API will be doing a bit more work.
Here's a bit of rough pseudo-code (unlikely to run, never tested) for a Node login back end.

import { createHmac } from "crypto";

function login( request, response ){
	var now = Date.now();
	var oldest = now - 30000;
	// fromISO from your favorite DateTime library
	var timestamp = fromISO( request.body.timestamp ).valueOf();
	var { hmac, username } = request.body;
	// db is a Database abstraction layer here, use your favorite!
	var secretKey = db.Authentications.getSecretKeyForUser( username );

	if( oldest >= timestamp ){
		// HEY THIS LOOKS LIKE A REPLAY ATTACK!
		// The message is over 30 seconds old, so deny the login outright
		return false;
	}

	var authenticator = createHmac( "sha512", secretKey );

	// Note the parity here with the front end: we are hashing the same message
	authenticator.update( `${username}${timestamp}` );

	// If the one we created matches the one from the front end, we're authenticated
	return authenticator.digest( "hex" ) == hmac;
}

I Zoned Out When I Saw crypto, How Is This Better?

Thanks for asking.
This is better than the standard authentication process for a number of reasons:

  • Better protection against password reuse.
    You can't protect your user from other websites stealing their password and stuffing it into your website (other than disallowing known-breached passwords, a topic for another time), but you can add a layer of protection.
    By only storing a hashed value based on Alice's secret passphrase, you reduce the potential of an attacker being able to steal your data and use it elsewhere.
    This is effectively the exact same policy as never storing plaintext passwords, except we're doing the hashing on Alice's computer, which means...

  • Better protection against man-in-the-middle attacks.
    Granted, you should be running your entire website over SSL with HSTS, but just in case you're not, or just in case someone manages to perform a complicated SSL downgrade attack (only possible without HSTS!), HMAC further protects the all-important passphrase.
    Since the only thing exiting Alice's web browser under most circumstances is the Authentication Code, it's a lot (years, maybe) of work to bruteforce that hash and extract the secret key.
    Then, that key is only useful on your website, so it's even more work to bruteforce the key and extract the original passphrase. Generally speaking, this kind of effort is not worth it to the average website attacker.
    If you run a website with government, banking, or sensitive personal secrets, I assume this is all old news to you and you're probably doing something even better... right?

  • Better protection against replay attacks.
    This is pretty straightforward. Because we're mixing a timestamp into the message to be authenticated, an attacker has only one option to be able to pretend they're Alice after intercepting her login request: bruteforce the hash to extract the secret key, then update the timestamp and regenerate the message authentication code.
    Again, this is generally more effort than it's worth.

Is It All Rainbows?

Of course not.
Without a doubt, this is better than the standard login form that sends a username and password to an API.
However, it does still have some drawbacks.

  • Like almost every authentication system, it relies on strong cryptographic hashing.
    Because it uses lots of hashes, it's a bit more secure because there's more bruteforcing to do, but hashes are fundamentally based on the idea that it takes a really long time to break them, not on the idea that they're infallible.
  • On that note, the hashing and key derivation here is computationally expensive.
    Of course, this is inherent in the idea of using hashes; they are fundamentally intended to be expensive because if they were cheap and fast they would be easy to break. However, for users that are underprivileged (like in developing countries, for example) or otherwise running under-powered hardware, HMAC may cause non-trivial delays during login.
    For modern, full-powered hardware, it's likely your login flow will slow down by a few milliseconds, if it slows down noticeably at all. For under-powered devices, your users could see drastic slowdowns into the one second or worse range.
    Above all: know your audience. If your audience is entirely on under-powered devices, the resulting slowdown during login could be a deal-breaker. However, keep in mind that there is always a trade off between convenience and security. Perhaps the non-trivial processing time for your users will eventually be balanced out by your data being particularly secure if you ever get breached.
  • The secret is still sent across the internet.
    Once Alice's computer has hashed together her username and passphrase during signup it takes the secret key and... sends it across the internet.
    This is the only time that happens, but it only takes one snooping man-in-the-middle to ruin everything. There are ways to get around this problem, but they involve a very different signup architecture.
    In an effort to make this as usable as possible for people with boring old username/password signup flows, this will have to do; we need to get that secret key to the back end somehow.
  • There's nothing you can do about the human element without significantly changing your signup and login flow.
    Guess how hard it would be for an attacker to guess a username bill1968 and password billsmith1968.
    It's likely that you should be designing systems that detect and ban or discourage this kind of flawed input on signup, but ultimately humans are the weakest part of any security system.

So Should I Do It or Nah?

Yeah, you should do it.

It takes maybe an hour to set this stuff up, and the security gains are worth far, far more than an hour of your time.

Happy Hashing!


Thanks!

This post could not be what it is without help from the following folks:

  • Ian Graves: For explaining HMAC in a way that finally clicked for me so that I felt like I could go learn how to do it in a browser.
  • David Leger: For reviewing an early version of this post (that will probably never see the light of day) and reminding me to keep it simple.
  • Taylor McCaslin: For reminding me not to make it easy to copy/paste untested code, and for lots of insight into the complexities of client-side hashing and key derivation performance.
  • Chris Moberly: For a thorough review and for reminding me to include a note about HSTS and HTTPS/SSL.
  • Jeremy Matos: For reminding me that visual things are easier to understand, and prompting me to include the sequence diagrams.
]]>
https://log.rdl.ph/post/protect-your-users-with-hmac-login-authentication.html2jBZctncB2ndqkr7yvOTA8Fri, 13 Mar 2020 01:45:00 GMT
<![CDATA[Stop Writing Inhuman const]]>

All code is for humans. In fact, the only value in higher-level programming languages is communication with other human beings.

Phrased as the reverse, code is not for computers. If code were for computers, binary would be the only acceptable programming language.

Code is for humans, as I have written about at length before.

We Forgot That Words Have Meanings

File this one under the tsunami of reasons I have to regret being a software engineer. Somewhere along the line, the siren-call of tech above all else washed away the plain truth that other human beings should come first, no matter what the tech is or how it makes you feel when you're writing the code.
Words mean things.
JavaScript's const is a shorthand keyword for the word "constant." Constant has a very specific meaning in the world: it is things that do not change.
Even if you're using const in a language where the variable is truly constant in every sense of the word (which JavaScript is not, in most cases), you unequivocally have other tools to store values, and you use them regularly. You use them regularly because the code you write is intended to communicate with the next person reading it.
The message the next reader consumes is either "this value is constant" or "this value is not constant." That's because "constant" is an English word that has a meaning, and any programming language using a keyword anything like const is leveraging the real English word to provide context and meaning to the next reader.

King var

var is shorthand for "variable." How beautifully succinct! How flexibly true!
For almost all practical purposes, and in almost all cases, a variable is what's desired, and a variable is what's being used. If you have an application where every single code path is purely deterministic in every sense: congratulations, you have a program with no data or user input. It is in this strange scenario that you might be forgiven for using const exclusively. After all, nothing can be dynamic if there is no input, and the return values from the function calls are already known ahead of time (if you're even using functions; what's the point of encapsulation and modularization when the entire program is hard coded?).
var should be your go-to, always-on, default.

Edgy let

JavaScript has a very powerful scoping paradigm called "Lexical Scoping." If that gives you unrest, I strongly encourage you to deeply read this book by Kyle Simpson. Lexical scope is basically just "scope you can read by looking at the source."
Some people think that lexical scoping is bad. Because of this, they petitioned to have two new types of variable declaration keywords added to JavaScript (ECMAScript). let is one of these. let uses "Block Scoping," or - basically - "every set of curly braces is a new scope."
There are essentially two situations where block scope becomes useful: when using arrow function expressions, and when writing loops. Using the former is itself inadvisable for many reasons, but arrow function expressions have a handful of extremely strong use cases. For these, block-scoped let variables are very useful. Similarly, writing loops is generally inadvisable, but in those situations when a loop is the best tool available, once again let's block scoping is invaluable.
Note what's being asserted here: if you need to send a particular message to the next reader, that's the time to opt into a new scoping mechanism. If the new scoping mechanism isn't necessary or if - as is the case in the vast majority of cases - lexical and block scoping are identical, sending a confusing message to the next reader is wrong.
The let keyword is a signaling mechanism to the next reader that there is a conflict between the scope on the current line, and the lexical scope it may inherit. let is a flashing red light with a klaxon that says "BLOCK SCOPING IS NEEDED HERE FOR SOME REASON! PAY ATTENTION!".
let is for a small handful of edge cases, and only for these edge cases.
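
The classic loop case makes the signal concrete; with var, every callback would log 3:

for( let i = 0; i < 3; i++ ){
	// Block scoping gives each iteration its own i,
	// so this logs 0, 1, 2 instead of 3, 3, 3
	setTimeout( () => console.log( i ) );
}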

Rarified const

Finally we arrive at const. A shorthand for the English word "constant." There is almost nothing in programming that is constant. There's even less in JavaScript that is constant, because const doesn't provide even the thinnest veneer of constancy.
Any use of const is explicitly a lie to future readers. There is no guarantee of constancy in any form.
That said, language is imprecise, and "constant" is a pretty well-understood word. It would be a shame to be unable to use it. We also have plenty of wise advice against ever modifying built-in objects (the only way to break constancy for boxed primitives), so with our fingers crossed we can decide to use const for a small set of variable declarations.
The only acceptable time to tell future readers of your code that something is constant is when you can be reasonably sure that on a reasonable timescale (say, a few weeks) the thing in the variable won't be any different.
This is the definition of constant, and when the programming language trades on the familiarity of that word, it must also depend on the definition of it. Here are a few examples of constants:

const AVOGADROS_NUMBER = 6.02214076E23;
const PI_TO_TWENTY = 3.1415926535897932384;
const ONE_HOUR_IN_MS = 3600000;
const THE_BOSS_MAN_ID = 42;
const COMPANY_PHONE = "123-555-6789";

Remember: every character in code is a form of communication to the next reader. Therefore, the only valid time to communicate that something is constant is when the declared variable is holding something that's constant in the real world that the software is modeling.

So to recap our options for declaring a variable:

  1. var is king. Practically everything is variable. The default scope in JavaScript is lexical scope. Reach for var first to allow future readers to lean on that default and understand that you have no special case scenario.
  2. let is for those special cases. let is a caution sign at the side of the road for the next reader: "This code needed to opt into a special case of block scoping, so pay attention!" In general terms, you should only use let in loops and arrow function expressions.
  3. const is for telling the next reader that a value is a constant. It's for signaling that you want the reader to opt into a mental model that corresponds with the world around them, just like they always do for everything else. const says to the reader: "This is something you can understand just by reading what's 'on the tin.' It's a constant!"

It's by no stretch of the imagination a perfect measure, but this rule is very close: if you're placing anything other than a primitive String or a primitive Number on the right-hand side of a const declaration, you're writing inhuman code.
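
To make the rule concrete, here's a sketch of the difference (fetchUser and userId are purely illustrative):

// Inhuman: nothing here is constant in any real-world sense,
// and const doesn't even prevent the mutation on the next line
const user = await fetchUser( userId );
user.name = "Someone Else";

// Human: a primitive modeling something constant in the real world
const ONE_DAY_IN_MS = 86400000;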

But What About Some Other Mental Model!?

A mental model other than the one everyone is using for everything, all the time? The one where everything seen - including source code - is interpreted through a human being's experience of the real world?
Indeed, let's put aside that mental model and - purely for the sake of the exercise - pretend that the standard human cognitive model may not apply.
I have been subjected to a number of explanations for why const should be used more frequently. Of these, the only two that made any internal sense were these:

  • const isn't about value mutability, so acting like the assignment to a const is a constant value is the real lie.
  • The value of const is signaling that you don't intend to re-assign the variable binding at a later time.

It's not about value mutability, it's about modeling the real world

The first of these is the most trivial: Indeed, practically nothing in JavaScript is immutable (with the exception of actual primitives, but even these are dynamically boxed and unboxed at any time, so it's very difficult to get a truly primitive value orphaned from any possibility of being a higher-order type). Because practically nothing is immutable, it's entirely not useful to think about const in terms of value mutability.
The way to think about const, like the way we should be thinking about all things, is: "What is the human who will be reading this next going to think when they see this?" We all use a litany of mental shortcuts and cognitive abbreviations to get through our lives, and const will always be expanded by the brain to "constant." It's just missing 3 letters!
The way to think about const is to model it after the real world that the software is modeling, and that the human reading the code lives in. This is the model for const, not value mutability.
Put more succinctly: The code is irrelevant! It's the people that matter!

Programming has never had - and should never have - an intent-based programming paradigm

The second of these explanations is far more nuanced, and as with all things more nuanced, makes me much angrier.
There is no place in software development for communicating some other-worldly future intent. Indeed, this tenet has been concrete in software for practically all the time that software has existed.
There is no mechanism to magically push an author's intent onto some poor reader's mental stack, because doing so is an absolutely horrific thing to do to another person. The reason software works is because readers don't need to maintain a mental registry of which intent was registered in which order, and whether that intent has been fulfilled yet.

const-infers-intent is practically unfalsifiable

Especially - especially - when there's no falsifiable statement for that intent. If const signals "intent to not re-assign the binding," the only way to determine if that intent is fulfilled is to read the entirety of the source code from top to bottom. Only then - and only at that moment - can you say the intent was fulfilled. Any changes to the source at all will invalidate your verification completely and you'll have to start all over.
The only useful measure of whether a variable is intended to never be re-assigned is never re-assigning the variable.

The madness of const

There's zero concept of a pop-fly ball that will come crashing down into the code randomly at some later time.

The madness here is this: saying that const communicates intent implies that the wider world of development should introduce an entirely new mental paradigm: the intent-based program.
Nowhere in all of JavaScript (perhaps all of programming?) have we ever done anything remotely like this. Code is always about now. What is the code doing now. It's the only thing that matters. Even code that inherently deals with the future state of things - like Observables and Promises - does so by interacting with an object... now. A stream of data or a promise of a response is created immediately, and dealt with immediately. The author binds listeners to data entering the stream, or they handle different states of the promised response. In some cases, the author hands the response object off to some other set of code for it to handle.
In every case, some future behavior is dealt with right now in the code. There's zero concept of a pop-fly ball that will come crashing down into the code randomly at some later time. This is what the idea of const-as-intent implies. It's an entirely new and baffling mindset that expects a reader to ingest a line of code as "future intent" and then - at an unspecified and unconnected moment later - recall the intent.
This is madness, and I refuse to acknowledge ideas as valid if they make software less readable and less comprehensible. If intent-based programming is the future of our world, may God have mercy on our souls.

Once more, for the folks in the back

  1. The intent of the author is irrelevant. Programming doesn't deal with what your intent is or was.
  2. Value mutability is irrelevant. const should be used to match the mental model of constant things in the real world.
  3. var should be the default variable declaration choice.
  4. let should be used when certain special cases are encountered, like arrow function expressions and loops (use both with caution!). Use let to tell the next reader that block scoping is different from lexical scoping, and it's needed right here!
  5. const should be used to pull magic numbers (like IDs, range limits, special numbers, etc.) or repeatable primitive strings (like a phone number or a special phrase) out of the code. Anything "constant" in the real world can be const in code.
]]>
https://log.rdl.ph/post/stop-writing-inhuman-const.html5YedERavmDPWhcgNDHJwlUFri, 31 Jan 2020 19:45:00 GMT
<![CDATA[Why I Use Web Components]]>

Twitter Hot Drama

I am functionally incapable of being part of the in-crowd cool kids when it comes to hawt hawt JavaScript trends, so here's my entry in the ongoing Twitter flamewar over native Web Components (aka Custom Elements, Shadow DOM, etc.).

The Reasons I Use Web Components

  1. I want to use the platform.

    "But [my framework] is the platform! It's all just JavaScript eventually!" you say.

    Ugh.

    I kind of can't believe that people have to counter these kinds of maliciously bad-faith arguments, but here goes: Yes. Yes, your favorite framework does use the platform, eventually.
    It does so in the same way that jQuery's $( [selector] ) function and "My text {{interpolation}}" "use the platform." I'm not seeing any of the framework people going to bat for going back to jQuery and Handlebars.

    I've seen the rise and fall of more frameworks than I can remember the names of, and the commonalities of them all are: A) You have to do it our special way and B) Whoops, we've dropped support for that. Time to rewrite your entire application.

    Use the platform isn't just a meaningless phrase we throw around (despite the big names in the frameworks trying to undermine it by doing so). It actually means something. It means building low-risk, high-resiliency applications and websites that leverage standardized, native features first, enhanced by proprietary code later. It means shipping less code that the end user has to pay the price to download and run.

    I genuinely believe that Use The Platform (not, Use The Parts of the Platform We, Corporations, Have Decided To Grant You Access To) is fundamental to software professionalism. Knowing the right time to reach for proprietary tooling (spoiler: it's as late as possible; later than you're imagining) is a sign of being an engineer.

  2. I want the product of my effort to last longer than I care about the result.

    This is another way of talking about craftspersonship.
    Have you seen a job posting for BackboneJS or KnockoutJS developers? How about AngularJS? .NET 3?
    I have. I've seen these job postings in the last month and when I do I shudder and click away. I also feel bad for these companies, because they're in this position because they employed coders who convinced the people around them that "this is the future." It's our responsibility as people who are ostensibly professionals to avoid locking the companies that we work for into vendor-specific code and library-specific coders.

    This is why the very first thing I ever wrote on this blog was about Convention Syndrome and its cure, the web a la carte. It's not necessarily bad to use proprietary code. It is necessarily bad to tie your entire product to it. It's necessarily bad to tie your entire identity to it.

    By using native Web Components, I give future developers the gift of no learning curve. I give them the gift of "It's just built into the web browser, you don't have to ship any third party code." I give them the gift of standards-compliance - borne out of the fires of standards committees - baked into the tool that every consumer uses to run the application.

    When I ship native Web Components at the companies I work for, I'm giving them the gift of a job posting in the future that just says "You need to be really good at JavaScript and HTML and pretty good at CSS" instead of listing a litany of old libraries that just aren't what the cool kids are into these days.

    When I stop caring about the product of my craft, the product itself should be able to carry on without me for decades without input. That means fewer libraries, fewer build and compile steps, and less code.

  3. I want to use components.

    Not all of the time, but most of the time. Not for everything, but for most things.

    Here again, I see bad-faith arguments like, "But component-based designs are a solved problem!"

    No, they're not.

    The argument that - as long as you use [my favorite framework] - components are a solved problem is so boundlessly illogical I'm actually struggling to come up with a metaphor so ridiculous it fits.

    It would be like saying "America doesn't need a solution to the healthcare crisis because if you live in the UK you already have government-provided healthcare!" This is such a patently absurd statement that I'm again struggling to explain why it's so stupid.

    The point of a standardized component model is to free the web from framework churn. Building applications using these native APIs means that no matter what I layer on top, the application still fundamentally runs on a common component model. This frees libraries and application developers to innovate where it actually matters: at the very last edge where users interact with things.

    I want a component model. I want to encapsulate simple styles that only apply to small chunks of DOM. I want to be able to iterate over a list and stamp out clones of a small chunk of code and presentation for each item.
    I want these things, most of the time.

    What I don't want is to have a set of library authors or a multibillion dollar company tell me how and when I should do so, and how I should pass data, and all the other things the frameworks have decided is their responsibility.

    I just want the component model, and that's what Web Components gives me. (A bare-bones example follows this list.)

  4. I want free, backwards-compatible future upgrades.

    You can still use <marquee></marquee> tags, because web standards last practically forever.

    If you started using fn( ...[] ) back when spread was first standardized instead of Lodash's _.spread(), you got a performance upgrade on the scale of orders of magnitude. You may not have noticed, but you did. The browser engines began optimizing for in-the-wild use cases and it got incredibly fast.

    By using native, standardized Web Components, I'm shipping products that will last for decades and will get better over time. Critically, these improvements require no maintenance from me. No updating the library, fixing the regressions, re-compiling, re-bundling, and re-deploying. A static HTML file built with Web Components inline will just get better over the years.
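
Here's that bare-bones example from item 3: about the smallest possible native component, with no library and no build step (the element name and greeting are purely for illustration):

class XGreeting extends HTMLElement {
	connectedCallback(){
		// Shadow DOM encapsulates this style: it can't leak out,
		// and outside styles can't leak in
		this.attachShadow( { "mode": "open" } );
		this.shadowRoot.innerHTML = `
			<style> p { font-weight: bold; } </style>
			<p>Hello, ${this.getAttribute( "name" ) || "world"}!</p>
		`;
	}
}

customElements.define( "x-greeting", XGreeting );
// Usage in any HTML file: <x-greeting name="Alice"></x-greeting>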

So...

I use Web Components because I think it's the right thing to do.

]]>
https://log.rdl.ph/post/why-i-use-web-components.html3p9S7mNPoosHISsDlTla5rSun, 23 Jun 2019 23:48:00 GMT
<![CDATA[Your Company is Not a Special Snowflake Unicorn]]>

The funny thing about logical fallacies is that once you know them, you see them everywhere, but you still can't talk about them.

Your company probably has great culture, right?

You encourage transparency. Openness. Honesty and frankness above politics.
Right?

Try telling someone their statements are fundamentally illogical and stem from bad reasoning and anti-intellectual thought.

Conventions Are Signifiers in Common Culture

I'm going to crib a bit from Don Norman's fantastic (if demoralizingly dense) The Design of Everyday Things here; it is a phenomenal read and - after 31 years and only* one revision - it is still the preeminent book on the psychology of design (read: humans in the world).

A signifier is something that tells you where an action takes place. A door indicates that it provides access to another, separated area; the knob on the door is the signifier that tells you where the action to open the door takes place.

The doorknob is a good example of a convention. Our entire culture (indeed, many cultures around the world), have been using a doorknob for a very long time. We all learn about doorknobs generally at a very young age, and that convention is now the standard.

Conventions Exist to Eliminate Mental Friction

Let me pose a question to you, reader. It is not a trick question, even though it seems like the answer must not be as obvious as it seems.

Imagine that you would like to get information about your organization as configured in an application that you are logged into. That application provides a main menu navigation item called "Organization". Within the resulting page, there is a sub-navigation with - among others - these two navigation items in it:

  • Users
  • Account

To find information out about your organization you choose:


If you chose Account, you are absolutely correct. The Organization > Account navigation hierarchy is where you would go to get information about your organization in this application.

It's important to note at this juncture that you and I didn't make this up just now. We're using something called convention. I'm not teaching you a new word, you know it, and I know you know it. You know that you get organization information from Organization > Account the same way that you know how a doorknob works: it's a standard convention that you - and everyone else - learned a long time ago.

Sometimes it makes sense to break convention. Usually these times are when you have lots of data that your new way is much better, and the benefits outweigh the costs. It never makes sense to install a set of hinges on a door and claim they're the handle.

We rely on conventions to avoid having to constantly relearn new design languages.

Well in OUR house, you open the door by prying on this second set of hinges!

No.

No. You are an absolute madman, and your decision is dumb.

Breaking Things Isn't a Personality

Imagine the above quiz where I asked what navigation you would use to find organization information. Now, imagine the answer was the opposite: to find some organization information, you do not go to the Account navigation item (it's not there!), you have to go to the Users navigation item.

I think that, like me, your response would be something like:

Ah yes the users page
For all your company info needs

If the response to your disbelief is like the one that I was given:

Different products have different metaphors and are solving different problems

...look out!

This has four serious problems:

  1. It's a meaningless statement - a vacuous truth (an informal fallacy)
  2. It's a non sequitur argument (a formal fallacy)
  3. It's an example of special pleading (another informal fallacy)
  4. It conflates metaphoric commutation with egregious violations of convention

Meaningless Statements Are Not Deep Statements

If the wheels on my car only turned 5° and I told the manufacturer about it, and their response was:

Different vehicle wheels have different steering degrees and turn to solve different problems.

...again: no.

This is exactly as useful as the manufacturer replying that "some cars are red and some people like red cars." It is simultaneously true and utterly meaningless in this context.

This brings up an important point: just because something is true does not make it insightful, useful, or applicable. This is yet another informal logical fallacy: the irrelevant conclusion.

Your evidence may be persuasive, perhaps even objectively correct, but if it's not relevant to the larger point you are - and I quote another name for the irrelevant conclusion fallacy - missing the point.

Irrelevant Things Don't Matter

A non sequitur fallacy is a formal fallacy. In fact, it is the formal fallacy. It literally means "it does not follow." One specific type of non sequitur argument is "Affirming the Consequent."

Affirming the consequent takes this form:

  1. If A, then B.
  2. B is true.
  3. Therefore, A is true

A may in fact be true in this case, but the evidence ("B is true.") is not sufficient to show that A is true.
Restated with our navigation question:

  1. If our application should put organization information in the Users page, then products have different metaphors and solve different problems.
  2. Products have different metaphors and solve different problems.
  3. Therefore, our application should put organization information in the Users page.

Even if the consequent wasn't meaningless (see above section), the explanation given does not magically provide evidence for the antecedent.

Of course, this formal fallacy only applies if the explanation was given as a good faith explanation for the illogical placement of the organizational information. If the statement was not intended as a formal explanation for why a thing is the way it is, a formal fallacy cannot - by definition - apply. I have to assume that arguments like this are given in good faith as a legitimate attempt at persuasion, but there's always a chance they may not be.

That's alright, though, because...

Your Company Isn't Special

Special pleading is a kind of informal logical fallacy that attempts - with faulty or zero reasoning - to carve out an exception to a widely accepted principle.

It essentially posits that, yes: these conventions exist for the world at large, but - for no reason - we are special and don't need to follow conventions.

Take a look around; this might be the fallacy that pops up most often at your company.

You can make accurate estimates, unlike everyone else.
You can use modals (These are popups. Modals are popups. Say it with me: a modal is a popup.) because you'll be responsible, unlike everyone else.
You can put organization information in the Users page because your products have different metaphors and solve different problems.

Guess what? Your company is not "disrupting" the "where-are-my-settings" arena. You're not the AirBnB of settings pages.

Speaking of "settings"...

Babbling Gibberish Isn't a Metaphor For Something, It's Just Gibberish

A metaphor is a way to reference something without actually talking directly about that thing.

Here's a metaphor: The world is a stage.†
You see, the world is not actually a stage, but now we can talk about how people in the world are just playing parts (actors on a stage!) and how people are born and die, and how they come into your life and leave your life (entering and exiting the stage!).
You see, we talk about the stage as if it were the world, because we've set up an apt metaphor that is roughly commutative.

A metaphor for organization information is "Account". It is "Settings". It is even "Profile" or "About".

"Users" is not a metaphor for organization information, and it never will be. You cannot hand-wave away nonsensical violations of very normal conventions by positing something untrue.
Just this mere point alone is enough to make the entire statement meaningless: Yes, products use different metaphors, but that's relevant to neither the specific matter at hand nor the general topic at hand. Metaphors have nothing to do with nonsensical application heirarchy.

Are You Not Entertained Exhausted?

This is exhausting, right?
No, it's okay, you're exhausted by me, by this post. I get it. Remember up top? When I said "you still can't talk about them." You think I want to be this fucking asshole? You think I want to be the guy who quotes at least 4 different logical fallacies in one compound sentence?
This shit exhausts me. I'm exhausted by being unable to have actual discussions about actual things because every foregone conclusion is riddled with logical fallacies and outright untruths.

I titled this post "Your Company is Not a Special Snowflake Unicorn" because I think this kind of behavior is rampant at SV-funded, YC-grad-type companies. "Startup" is a euphemism for "savior."

Really this post could just be titled "I'm So Very Tired."


* I fibbed a bit. Yes, The Design of Everyday Things was only revised once, but Norman himself says that he "rewrote everything."
† Okay, Billy Shakespeare came up with the metaphor.
]]>
https://log.rdl.ph/post/your-company-is-not-a-special-snowflake-unicorn.htmlJJfGK3OMpiduSthOK9wvBThu, 28 Mar 2019 15:00:00 GMT
<![CDATA[Fluxbus - an application architecture pattern]]>

Facebook the Behemoth

Facebook is pretty good at a couple of very interesting things.
It also has two and a quarter billion monthly active users. The world has around 7.6 billion people in it, which means something like 30% of the world uses Facebook at least semi-regularly.

Let's stipulate - for a moment - that you are not Facebook.
Let's stipulate - just for now - that you do not have 2.25 billion users. Or even 1 billion. Or even 1 million. If you have 1 million users: congratulations! You are 4.44e-4 the size of Facebook, or .000444 the size - 4.44 ten-thousandths the size.

The red block is you, with 1,000,000 active users!

Given that stipulation, can we agree that you do not have the same problems Facebook has? You have the same problems that Facebook has in the same way that - say - your household has the same budget problems that the U.S. Government has.
I have yet to write at any length about the negatives (and it is all negatives) of cargo-cultism (other than another one-off comment), but suffice to say: if you think anything a company like Facebook is doing solves the same problems you are having, you are woefully deluded - by your own doing.

Now, this doesn't mean that technological accomplishments that a company like Facebook is doing can't be applied to everyone else (and we are really everyone else; Facebook is literally in a league of its own in terms of userbase).
It's just that the things Facebook is doing need to be tweaked.
Take - for example - the Flux architecture (and specifically its offspring: Redux). These things seem to look pretty good, but once you really start using them heavily, you begin to make all kinds of sacrifices because they prescribe an application structure.
For some, this may be fine. Indeed, I don't think many people will disagree that a pattern like Redux is prescriptive. "Yes. That's kind of the point," they will say. Unfortunately, I have found that this level of prescriptivism doesn't work well for me.

Why Redux (and Flux, sort of) Doesn't Work For Me

  1. Asynchronous is really complicated

    Have you ever used Redux? If you have, then you probably encountered a huge problem almost immediately: You can't just load data any more.
    Since everything has to result in a synchronous change to the store that subscribers then consume, your actions have to be thunks, and you probably need to use a complex middleware to interrupt the normal action processing pipeline until your async action finishes.

    This isn't strictly necessary with either generic Flux architectures or with Redux specifically, but if you implement asynchronous behaviors outside the "Redux way", well, you're not doing it the "Redux way".
    All sorts of things start to get more complicated. And - after all - isn't the point to standardize the way your data flows in one direction?

    Because Flux is much more general, I think asynchrony fits much better into that pattern, but it's still not easy: there's a whole example project just for async.

  2. Every app I've ever worked on uses more than one type of storage

    In most of my applications, there's a hierarchy of data. The absolute source of truth for one or two bits of information is a cookie (this cookie contains an authentication token, maybe one or two other things).
    We need a cookie because we need login to work across multiple subdomains, and solutions that don't use cookies are a ridiculous re-invention of the top level domain cookie that's worked forever.
    When the application(s) start, they check this cookie (what if the user logged in as a different user between the previous load and now?) and load the correct information from it into localStorage, sometimes triggering API requests to get more complete information and store that information in localStorage.

    During runtime, the application might change certain values in localStorage that need to persist to the next load, but don't need to go back to the server.
    Even if the application doesn't do that, it absolutely will store a large amount of data in one (of many possible) indexedDB databases.
    This data is indexed, timestamped, and keyed by request URL (if you request a?b=1, and we have that data, you get a free resolution, no network needed!).

    Finally, also during runtime, the application stores some information in the "window" storage also known as the global object in a custom namespace. This is just a scratch space for one-off memory, but it's very convenient for storing something complex like a function. Like a router.

    Flux (and Redux) give you exactly one kind of storage - the latter, a runtime-level storage. You can freeze it to a cold state and hydrate it again on app load, but this feels immensely wasteful.
    Why is my data not just there to begin with? That's the place it lives, why am I storing it in the Flux store as a dumb middleman when I could just be using the actual storage directly?

  3. Every single listener is called for every single change

    This one is a shared problem in both Redux and Flux.
    It's particularly bad in Redux, which demands that you use a single store for everything, but it's just scaled down to each Flux store.

    Since the subscription pattern in Redux is store.subscribe( someCallback ), there is no way to filter how many functions are run when the state updates.
    Imagine a moderately complex application that has a state value like { messages: { loading: true } } and a few hundred subscribers.
    How many of those subscribers care that the messages stop loading at some point? Probably 1. Wherever your chat module is, it's probably the only place that cares when the messages stop loading and it can hide a spinner.
    But in this subscription paradigm, all of your few hundred subscribers are called, and the logic is checked to determine if they need to update. The inefficiency is mind-boggling.

    Again, Flux doesn't quite have the exact same problem: not only because of the fact that you can have multiple stores for specific intents, but also because it doesn't prescribe any particular subscription pattern.
    You could implement a pre-call filter that allows subscribers to only be called for particular actions that they care about, but this is so far off the beaten path it's never been seen by human eyes before.

    The alternative is to only pull state during rendering, but if you do that - oh my - what is the point of a centrally managed store? Isn't the value that your components are always up to date with the same state changes?
    Sure, you could have a single subscriber at some very high level that's making the decision to pass new data to its children components, but then why does this component know about the store at all?
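
As promised in point 2, here's a minimal sketch of that indexedDB request cache. Every name here is hypothetical - the database name, the record shape - but the browser APIs are real: responses are keyed by URL and timestamped, so a fresh hit never touches the network.

// All names hypothetical; the indexedDB calls are the real API.
function openResponseCache(){
	return new Promise( ( resolve, reject ) => {
		var request = indexedDB.open( "response-cache", 1 );
		request.onupgradeneeded = () => {
			// Records are keyed by the request URL.
			request.result.createObjectStore( "responses", { "keyPath": "url" } );
		};
		request.onsuccess = () => resolve( request.result );
		request.onerror = () => reject( request.error );
	} );
}

function readCachedResponse( db, url ){
	return new Promise( ( resolve ) => {
		var get = db.transaction( "responses" ).objectStore( "responses" ).get( url );
		get.onsuccess = () => resolve( get.result );
		get.onerror = () => resolve( undefined );
	} );
}

function cachedFetch( url, maxAgeMs ){
	return openResponseCache().then( ( db ) => {
		return readCachedResponse( db, url ).then( ( hit ) => {
			if( hit && ( Date.now() - hit.timestamp ) < maxAgeMs ){
				// Free resolution, no network needed!
				return hit.data;
			}
			return fetch( url )
				.then( ( response ) => response.json() )
				.then( ( data ) => {
					db.transaction( "responses", "readwrite" )
						.objectStore( "responses" )
						.put( { "url": url, "timestamp": Date.now(), "data": data } );
					return data;
				} );
		} );
	} );
}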

The Part of Flux That Works For Me

The strict actions.

Dear sweet ${user.favoriteDeity}, standardized, universal, consistent actions are the best.

Every possible application workflow either starts with, ends with, or both starts and ends with a known action.

I honestly don't know why this pattern isn't used more often. When every part of an application that works on a "global" scale (think: routing, changing views, loading data) is codified and standardized, all you have to do to figure out how something works is follow the trail of actions it triggers.

You might have noticed above that there are some actions that don't seem to correspond to something going into state. You're right, this architecture isn't about resolving everything to state, it's about connecting the disparate parts of your application together without complex inter-dependencies.

This is the part of Redux/Flux that they get absolutely, gloriously correct: one, single, way for the application to encapsulate behavior.

The Best of Flux: Fluxbus

As mentioned above, the part of Redux/Flux that is good is the standardized messaging. The parts that are bad are myriad, not least the overly cumbersome data flow, the restrictive storage, and the shotgun approach to calling subscribers.

The problems boil down to the fact that the Flux architecture is prescribing too much. If the prescription were simply "All application changes are made by publishing a standard action," it wouldn't have any of these problems.

So, let's do just that. Fluxbus requires that all application changes are made by publishing a standard action through a central message bus. There's nothing more to it than that.

Example Implementation

// We're using rx to avoid having to implement our own observables
import { Subject } from "rxjs"; 

var bus = new Subject();
var LOGIN_REQUEST = {
	"name": "LOGIN_REQUEST",
	"username": "",
	"password": ""
};
var LOGIN_SUCCESS = {
	"name": "LOGIN_SUCCESS"
};
var LOGIN_FAILURE = {
	"name": "LOGIN_FAILURE",
	"problems": []
};

function publish( message ){
	return bus.next( message );
}

function subscribe( map ){
	// Returns an unsubscriber
	return bus.subscribe( ( message ) => {
		if( map[ message.name ] ){
			map[ message.name ]( message );
		}
	} );
}

subscribe( {
	"LOGIN_REQUEST": ( message ) => {
		fetch( "https://example.com/api/login", {
			"method": "POST",
			"body": JSON.stringify( {
				"username": message.username,
				"password": message.password
			} )
		} )
		.then( ( response ) => response.json() )
		.then( ( json ) => {
			if( json.problems.length == 0 ){
				publish( LOGIN_SUCCESS );
			}
			else{
				publish( Object.assign(
					{},
					LOGIN_FAILURE,
					{ "problems": json.problems }
				) );
			}
		} );
	}
} );

What's going on here?

// We're using rx to avoid having to implement our own observables
import { Subject } from "rxjs";

var bus = new Subject();
var LOGIN_REQUEST = {
	"name": "LOGIN_REQUEST",
	"username": "",
	"password": ""
};
var LOGIN_SUCCESS = {
	"name": "LOGIN_SUCCESS"
};
var LOGIN_FAILURE = {
	"name": "LOGIN_FAILURE",
	"problems": []
};

This is just setting up an RxJS Subject and three actions. Note that this is all a bus is. That's it. You can publish messages into it, and you can subscribe to published messages.

function publish( message ){
	return bus.next( message );
}

function subscribe( map ){
	// Returns an unsubscriber
	return bus.subscribe( ( message ) => {
		if( map[ message.name ] ){
			map[ message.name ]( message );
		}
	} );
}

This is just wrapping the native Rx methods with a simple helper (in the case of publish) or our filter magic (in the case of subscribe). The subscriber takes a map of functions keyed on a message name. If that message name is seen, that subscriber is triggered.

subscribe( {
	"LOGIN_REQUEST": ( message ) => {
		fetch( "https://example.com/api/login", {
			"method": "POST",
			"body": JSON.stringify( {
				"username": message.username,
				"password": message.password
			} )
		} )
		.then( ( response ) => response.json() )
		.then( ( json ) => {
			if( json.problems.length == 0 ){
				publish( LOGIN_SUCCESS );
			}
			else{
				publish( Object.assign(
					{},
					LOGIN_FAILURE,
					{ "problems": json.problems }
				) );
			}
		} );
	}
} );

This sets up a subscription that waits for a login request. When it sees one, it tries to log in with the provided username and password. If that login succeeds, this publishes a LOGIN_SUCCESS message. If the login fails, it publishes a LOGIN_FAILURE message.

Of course, you might want to organize this a bit better.
You could put all your messages in a folder. You could put all the stuff that sets up and interacts with the bus in a file like MessageBus.js. You could put all your universal responses (like to logging in) in action files like actions/Authentication.js.
Then, when your application starts up, it might look like this:

import { getBus, publish, subscribe } from "./MessageBus.js";

import { registrar as registerAuthenticationActions } from "./actions/Authentication.js";

import { message as LOGIN_REQUEST } from "./messages/LOGIN_REQUEST.js";

var bus = getBus();
var partiallyAppliedPublish = ( message ) => publish( message, bus );
var partiallyAppliedSubscribe = ( map ) => subscribe( map, bus );

registerAuthenticationActions( partiallyAppliedSubscribe, partiallyAppliedPublish );

// More stuff?
/*
	Automatically log in with a demo user on startup?

	publish( Object.assign(
		{},
		LOGIN_REQUEST,
		{ "username": "demo", "password": "demopwd" }
	), bus );
*/
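
For completeness, actions/Authentication.js might export a registrar shaped something like this. This is purely a sketch - the file layout and names are whatever works for you:

// actions/Authentication.js (hypothetical layout)
export function registrar( subscribe, publish ){
	// One place to wire up every authentication-related subscription.
	return subscribe( {
		"LOGIN_REQUEST": ( message ) => {
			// ...the same fetch()-then-publish login logic from the example
			// above, ending in a LOGIN_SUCCESS or LOGIN_FAILURE publish.
		}
	} );
}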

Footgun

This architecture is not for the faint of heart. A big part of the draw of something like Redux or Flux is that it tells you what to do all the way down to the UI layer. Then, you're expected to use React for the UI, and that framework will also tell you what to do.

If you don't feel like there's enough structure here for you, you're probably right.

Fluxbus expects that you will exercise careful, defensive programming techniques. Because you have the freedom to make an enormous mess of things, you are expected not to.

Note - of course - there's no talk of data stores or dispatchers or reducers. Those are all the unnecessarily cumbersome bits of Redux/Flux. You want a data store? Great! Find one you like, or just use the many extremely robust storage options built into the browser. Do you need a way to reduce changes to data into your store? I suggest Object.assign( {}, previous, changes ); but literally the whole world is open to you.
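
For instance, a subscriber can write straight to the storage that actually owns the data. A minimal sketch (the key name is made up):

subscribe( {
	"LOGIN_SUCCESS": () => {
		// Persist directly to localStorage - no store, no dispatcher,
		// no reducer standing in the middle.
		localStorage.setItem( "lastLoginAt", String( Date.now() ) );
	}
} );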

]]>
https://log.rdl.ph/post/fluxbus-an-application-architecture-pattern.html46HQUjRacgiCcQWqqQku2iMon, 07 Jan 2019 04:05:00 GMT
<![CDATA[Tabs vs. Spaces: One Right Answer]]>

Déjà Vu

Yes, I've talked about this before. I wrote that "Whitespace Is The Invisible Code Inside Your Code" and that "code is not primarily for doing things with code or computers; it's for communicating intent to the next person". Two data points is a line, three data points is a trend.

This is a trend I'm fully upfront about, though. I believe software is not necessarily about doing things faster ("Quicker is not necessarily better. Viva Code Liberte."), nor is it about getting to an arbitrary milestone faster ("despite being able to present a prototype in a short amount of time — the developer is rarely able to create a truly innovative end result.").

I think that's all of the self-referencing I'm going to do. Come with me while I explain why the Tabs vs. Spaces argument is a symptom of narcissism and should be dismissed. There is a correct choice, and those who argue against it are selfish.

Dumb Arguments

First of all, let's avoid the pitfall of arguing about keyboards. When we talk about tabs versus spaces, everyone is pressing the Tab key on their keyboard. If you're a spaces person and you're mashing on the spacebar to get the right indent level, you are an actual psychopath.

This argument is fundamentally about what your editor does when you press the Tab key. So let's define the terminology here:

  • spaces is when you press the Tab key and the editor inserts some number of space characters (usually called a soft tab).
  • tabs is when you press the Tab key and the editor inserts one tab character (usually called a hard tab).

Secondly, let's not get into talking about file sizes. *heavy sigh*. Nobody cares. If the process by which you are delivering this code is filesize-sensitive and you are not already using some kind of compression that makes long runs of repeated whitespace virtually free, you are doing your deployments incorrectly. Quick tip here: gzip will do this for web applications, your web server already supports it, and you just need to turn it on.
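
If you're serving from Node rather than a standalone web server, the equivalent is a couple of lines. A sketch assuming an Express app and the compression middleware:

import compression from "compression";
import express from "express";

var app = express();

// gzip every compressible response; repeated runs of indentation
// then cost practically nothing on the wire.
app.use( compression() );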

Is there anything else? We're not talking about file size and we're not talking about what you mash your meatsicles into. We're talking about editor settings.

The Only Good Argument

The only good argument in this conversation is the exact same argument that should be made for every other code decision: what is the best choice for developer quality of life and code readability in the future?

I used to be staunchly spaces. The idea is basically: spaces look the same for every developer. The way the code is authored is the way it's displayed to every developer forever. This is a garbage argument (see how I snuck this in here instead of in the bad arguments section?). The author should not be determining how the future reader looks at the code. Do you tell other people what their editor theme should be? Do you tell them what their font size should be? If you do, you're a fascist, and goodness, please quit your job to save the rest of us from you.

How code looks to each developer is something that is personal to each developer, and it's not the author's business to decide that.

So, let's break down the information that we have to encode into the source:

  1. The line needs indentation.
  2. Which level of indentation.

Note here that when I say that we "have" to encode this into the source, I come to that conclusion because this is the choice that lends itself best to future code readability and developer quality of life. You may notice some kind of pattern emerging here.

Sure, you don't have to encode indentation (yes/no) and indentation (amount) into your source code.
You also don't have to encode line breaks into your source code, but I suspect that you put line breaks in there.
So you're already on the "this-is-helpful-for-readability" train. If you weren't, you'd be writing all your code on a single line. You know. Like a monster.

The Choices

Spaces encodes three things into the source:

  1. The line needs indentation.
  2. Which level of indentation.
  3. How to display the indentation.

It's that third item that's the problem. Whether a developer wants to display each level of indentation as 4 spaces or 2 spaces or 9 spaces or a Fibonacci sequence of spaces for each level; that's up to them. The only way to change how a bunch of spaces looks is by adjusting the font size, which has all sorts of other - practically unrelated - impacts on the reader.

Consider, instead, the tab character. This encodes only two things into the source:

  1. The line needs indentation.
  2. Which level of indentation.

It is up to the editor viewing the source - and only the editor - to determine how to display that tab. Just like it's up to the editor how to colorize the code. Just like it's up to the editor to determine what font size to display the code in.

The Illusion of Choice

So really, there isn't a choice here. If you choose spaces, you are explicitly choosing to enforce an authored display decision on future readers. If you choose tabs, you are choosing to encode only the pertinent information and let the future reader decide how to display that.

In other words, "spaces" is a result of a mindset that puts the author - at the time they write the code - at the center: What does the author think this should look like? Spaces enforces that decision to everyone else; an outpouring of narcissism that reverberates across time.

"tabs" is a result of a mindset that believes future developers can make their own decisions about how they look at code. It's a live and let live mindset that every developer should strive for: "Here's a thing that works for both of us, even though we've both made different decisions layered on top of it."

Choose tabs. Viva Code Liberte.

]]>
https://log.rdl.ph/post/tabs-vs-spaces-one-right-answer.html4IfsQvVn6wMS4q6C8EsWKQWed, 29 Aug 2018 16:00:00 GMT
<![CDATA[Making a Game in JavaScript - Part 1]]>

Hello.

It is me, 28 year-old Tom.

I have ceased to gain any enjoyment from video games.

This is a major depressive event in my life, and I'm struggling to cope with it.

While I figure this out, would you like to come along with me while I try to teach myself game development in Phaser 3?

I promise it'll be a hilarious shitshow, since the current version of Phaser is 3.5 and the documentation isn't even online yet. Better yet, I have no experience with game development except for this tile renderer I built and this implementation of the only useful Phaser 3 documentation - "Your First Phaser 3 Game"

Step 1: What to plagiarize?

First things first: pick a suitable first game.

Pong was arguably one of the first "video games" ever created, and features extremely simple gameplay:

  • Two paddles move vertically on the left and right sides of the screen.
  • A ball moves around the screen bouncing off the top and bottom and off the paddles.
  • There's a dashed line down the middle.
  • There's a simple integer score at the top of each player's side - first to 11 wins (failing to return the ball is a point for the opponent).
  • The game makes a noise when the ball strikes the floor or ceiling, strikes a paddle, or exits the playing field.

There are a couple special features that the original developer put into the game:

  • The ball accelerates slightly the longer it's been in the game.
  • The paddles can't quite get to the top of the screen (a hardware bug, in the original!)
  • The paddles are divided into eight sections, where the center two segments return the ball at 90° to the paddle (perpendicular), and each segment moving outward returns the ball at slightly smaller angles.

Some other things I noticed about the original game:

  • Boy, this thing is a lot harder than I remember from when I played with a home TV console kind of like the one pictured below.
    [Image: a seventies-style Pong console, with vertical slider inputs "docked" to the left and right of the center control console, which holds the game configuration buttons and dials.]
    The in-game paddles are tiny and the analog control makes lining up a defense difficult. Maybe this was just a function of the arcade / cabinet implementation of the first iteration.

  • When you win, the paddles disappear and the ball bounces around as if there are walls just off-screen on the left and right. The score stays up - mocking the loser.

Step 2: What do I need?

Without question, I'm going to need four static assets - one graphic and three sound effects:

  • A single, white, 1px by 1px graphic.

    I can scale this to any size I want with Phaser's built in image scaling, and all of the game is some combination of a white rectangle on the black background.

    I don't know of any performance problem with doing the scaling in-game, and I also don't know of any way to generate graphics that I can apply physics to in-game without having any static images. I'm almost positive there must be a way, but programmatically generating graphics is well outside the scope of this segment of my learning.

  • A sound effect for the paddle strike, the floor/ceiling strike, and for scoring.

    I've never used Phaser's built-in Audio engine, so this one will be a challenge. I also don't know where I'll get free audio clips, but I'm sure I can figure something out.

Step 3: What do I do?

I'll need three physics bodies: the ball, and the two paddles. The paddles will be controllable by the keyboard (mouse as a future upgrade!), and the ball will just bounce around with standard "arcade" physics. Since Phaser ships arcade physics by default (as well as something called Matter physics, which looks like it can do all the big boy physics engine things), this will be utterly trivial to implement.

Of course, there will need to be a bit of logic around bouncing off the top and bottom but scoring at the left and right. Likewise, implementing the varying angles of bounce off the paddles will require some interesting logic. I think there's a way to determine which parts of the objects collide with each other, and I think it's baked into the default Phaser physics colliders, but I'm not sure. Boy, it'd be nice to have online docs.
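
Here's the rough shape I expect those three bodies to take - a sketch written before I've actually built anything, so every size, key, and number is made up:

import Phaser from "phaser";

function preload(){
	// The single white pixel, scaled up for everything on screen.
	this.load.image( "pixel", "assets/pixel.png" );
}

function create(){
	var ball = this.physics.add.image( 400, 300, "pixel" )
		.setDisplaySize( 10, 10 )
		.setBounce( 1 )
		.setCollideWorldBounds( true );
	ball.setVelocity( 200, 120 );

	var leftPaddle = this.physics.add.image( 20, 300, "pixel" )
		.setDisplaySize( 10, 80 )
		.setImmovable( true );

	// The collider hands back both objects, so the return angle can be
	// derived from how far from the paddle's center the ball struck.
	this.physics.add.collider( ball, leftPaddle, ( b, paddle ) => {
		var offset = ( b.y - paddle.y ) / ( paddle.displayHeight / 2 );
		b.setVelocityY( 200 * offset );
	} );
}

new Phaser.Game( {
	"type": Phaser.AUTO,
	"width": 800,
	"height": 600,
	"physics": { "default": "arcade" },
	"scene": { "preload": preload, "create": create }
} );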

A newly generated ball always starts at the center line and flies - at a random angle - toward the last player who was scored on. This is a great implementation to really just grind the salt into your worst enemy's wounds, since it's really hard to react to the ball given that you have half the time you'd have during regular play. In the original implementation, you had to have been a masochist to have fun, and some people are really into that kind of thing, I've found out:

As you became better friends, you could put down your beer and hug. You could put your arm around the person. You could play left-handed if you so desired. In fact, there are a lot of people who have come up to me over the years and said, 'I met my wife playing Pong,' and that's kind of a nice thing to have achieved. – Nolan Bushnell, on Pong, quoted from Wikipedia, quoting from Next Generation magazine, 11 April 1995

Kinky.

To start with, the score can just be text rendered onto the screen. Like the original game, text in Phaser doesn't interact with the game physics, so there's nothing extra to do there. As a side note, it might be a fun exercise to make some kind of game where the game text like the score is a part of the game physics.

For the audio, I'll have to figure out how the Phaser Audio API works, but since all I need to do is play a short sound on paddle and floor / ceiling collisions, plus a sound when a score happens, I suspect I'll only need to learn a couple functions, like - just guessing here - loadSound and playSound. We'll see.
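
For what it's worth, here's a sketch of the shape I'd expect that pair to take, going by the rest of the loader API (clip key and file name made up):

function preload(){
	// Register the clip with the loader under a key...
	this.load.audio( "paddleHit", [ "assets/paddle-hit.wav" ] );
}

// ...then, inside a collision handler:
// this.sound.play( "paddleHit" );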

I'm off

I'm going to go off and do this and try to ignore the existential dread that maybe interactive entertainment isn't for me - an old - any more, and maybe now I'm one of the mindless consumer-only TV-watching drones that I hate.

I'll post again when I've finished my implementation of Pong with a wrap up of how it went and what I misjudged from the start.
So anyway, cheers! See you next time!

]]>
https://log.rdl.ph/post/making-a-game-in-javascript-part-1.html1CmUYLr3XSaeG0S0mA28EeTue, 17 Apr 2018 05:45:00 GMT
<![CDATA[The Seven Things That Make a "Senior" Engineer]]>

Over the past few years, I've been asked this question - "what is a Senior Front End Engineer?" - roughly a half-dozen times. Each time, my answer has evolved a bit more, but the same basic points have always been present. Sometimes the question is more specific to JavaScript (e.g. "Senior JavaScript Engineer"). My answer is roughly the same for all of these instances.

I have seven items that I believe define a Senior Front End Engineer, although with the exception of item number two, these qualifications could apply to any senior role at all.

Foresight

In the interest of context, I used to say my number one item for a senior engineer was "deep knowledge of JavaScript fundamentals." Unfortunately, when you say that these days, people who used React for a year and really, really like the let keyword and arrow functions think they fall into the "highly experienced with fundamentals of JavaScript" camp, and that's not often the case.

So, in this new era of "script-kiddie-turned-programmer" my number one thing for a Senior Front End Engineer is foresight. Unfortunately, foresight isn't something you can drop into a GitHub repo for review, so it's pretty tough to gauge.

An example: given a choice between a monolith and highly decoupled task-specific tools, do you choose the plug-and-play monolith or do you choose the task-specific tool?
In more concrete terms: Do all of your integrations start with [insert-tool-du-jour]- or are they standalone?
To me, choosing the standalone tool demonstrates foresight. If your entire app is react-this, react-that, react-router, react-dom, what happens when one dependency ships a breaking change you can't upgrade through, or the underlying technology stops meeting your needs? Those decisions result in complete system rewrites instead of highly focused slice replacements.

I'm speaking from experience here, both having been in this wasteland myself, and looking in from the outside: An adjacent team is rewriting their product (previously Angular 1.x) because they can no longer sustain the monolith and all of its snaking and intertwined dependencies. However, they chose to rewrite it in React, and have dived back into the deep end of a highly coupled architecture, where everything depends on everything else.

There are certainly tradeoffs to either approach, but the benefit of the "monolith" approach is usually "we can do so much so quickly!" Unfortunately that also means everything that's done is often torn down wholesale and rewritten because it becomes untenable.

#1: Foresight. Quicker is not necessarily better. Viva Code Liberte.

Deep knowledge of the platform

My number two qualification is still deep knowledge of the ecosystem, but I've expanded it from just JavaScript to include HTML and CSS. These three things are equal parts of the task of developing for the front end and many (most) devs have explicitly made the decision to focus on just JavaScript. This lack of understanding of the entire platform (the web is a platform, after all), has led to a lot of tangential problems (like people inventing new ways to only write JS but still get CSS and HTML out of it by jumping through ridiculous hoops).

However, there are real, tangible problems that this lack of depth causes. I recently witnessed a developer spend a week or so writing an autocomplete component in JavaScript. When it was announced and ready for review I asked - honestly - why we weren't using the native autocomplete element built into every browser. We eventually just went with the one we had built because it was "easier to style" and - more or less - we had sunk the cost into it already.

Demonstrating deep knowledge of the platform you're ostensibly a professional in is both critical to being able to claim mastery, and critical to being able to make wise decisions that build on solid, standardized groundwork.

#2: Deep knowledge of the web platform, in all its parts. Build on the shoulders of giants.

Communication

My number three qualification is basically "soft skills", but I tend to focus on communication, primarily written.

Being a developer is mostly about communicating. Yes, there's a part where you write code, but even that code is not primarily for doing things with code or computers; it's for communicating intent to the next person, even if that next person is you, later.

If you accomplish the "now" goal but do so in a way that nobody understands in two months or two years (when the "now" goal has turned into the "that-was-then" goal), you've failed at your job as a developer.

A relevant aside: this ties back into the number one qualification - having foresight means building code that meets the "now" goal and doesn't need to be deleted and re-written for the "new-now" future goal.
I digress.

Communication is a deep topic with many facets, but being able to actively retrospect on decisions that you've made and determine the outcomes of those decisions - even years down the road - is critical.
Communicating that introspection in a coherent way is crucial.
After significant incidents, I try to write up a retrospective: what happened (summarize, describe the intent of the document, give background and context), why did that happen, what steps were taken to resolve that at the time, and how can it be avoided in the future?
Break problems down using a root cause analysis method like the 5 Whys.

Too many times (too many to count), developers give the facts: "This broke. I fixed it. It is not broken." This doesn't help anyone in the future, and it doesn't delve into the hard parts of problems which typically aren't facts.
Case in point: in a previous retrospective document, I wound up suggesting preemptive skills testing and company-provided training on weak subjects. Without looking deeply at the - inevitably human - failures and breaking them down to their constituent parts, there's no path forward for learning and growth. Being able to communicate that is critical to both personal and professional growth.

What I'm attempting to get across - perhaps poorly, in an ironic twist of fate - is that communication about all kinds of things is important. Being able to communicate facts and being able to communicate truths - even if complex or painful, is a sign of maturity and depth. Giving every detail empowers others to extrapolate and form patterns and learn on their own.

#3: Superb communication skills. Being an engineer is communication. Being a coder is writing code.

Hype-Aversion

My number four connects back to my number one at a foundational level.
Senior engineers are hype-averse. Not hype-neutral. Hype-averse.
A senior engineer is actively suspicious of the most popular technology. They carefully evaluate it without buying into dangerously harmful cargo-cultism and hero worship. They understand the wide-reaching implications of technology decisions on a business on the scale of 10 years, not two.

#4: Acutely hype-averse.

Craftspersonship

My number five qualification for senior engineers is care about the product of their energy. Another way to put this is being a craftsperson.

Writing front end software is all about catching edge cases. Browsers and mobile phone environments are this crazy land where there are practically infinite situations that the user might be in, and practically infinite ways the user might not use the code in the way that was intended.
A craftsperson knows that they need to handle all of the unexpected ways things could go wrong - not just the one way things could go right.
A craftsperson pays attention to - and cares about - the details. Spelling and grammar matter, color contrast matters (what if the user is in direct sunlight?), font size matters (it should be larger than your team of 25-year-olds thinks it should be).

Something I see with alarming frequency is code copy-pasted around codebases with accompanying information unchanged.
It happens all the time in tests: the test description ("should do x") is unchanged from the place it was copied from. Not only is this misleading when the tests are running / failing / being refactored, but it's indicative of carelessness.
It happens in application code, too: some expression operates on an object property, but there's no check that the variable is actually an object, which leads to fatal thrown errors.
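
The guard is a one-line habit. A minimal sketch, with made-up names:

function getDisplayName( user ){
	// Check the variable is actually an object before touching a property.
	if( !user || typeof user !== "object" ){
		return "";
	}
	return user.displayName || "";
}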

I often hear things like "I don't care about [insert something small relating to software development], I just want to be able to get more work done." This is the mark of someone who lacks the spirit of a craftsperson. Caring about every detail of the thing you are creating and putting your name on elevates that work to a higher level, and that higher level is where it should be.

#5: The mindset of a craftsperson. Caring about the tiny details and the big picture.

Bubble Awareness

My number six qualification for senior engineers is being aware of their own bubble and actively trying to burst it.
This one is pretty straightforward: Our field is mostly filled with white men ages 20-40 in the middle or upper-middle income range. These people are generally healthy or have medical support to the point where their health doesn't have a regular daily impact on their lives. These people tend to write software for themselves.
I mentioned color contrast and font size above. Know your target audience - or, even better, consider the needs of everyone, not just the target demographic. Text should be larger and higher contrast for all but the absolute healthiest eyes (not to mention - of course - those who can't see at all).

I was recently involved in a discussion about what the default body font size should be in our production applications, and 13px was legitimately floated as a popular option.
For a person whose eyesight is a little bad, maybe using our software outside, 13px would be virtually unreadable.


There are a lot of young developers - many of whom recently graduated from bootcamps. Young developers - some of them given the crash course in getting a product out as fast as possible - tend to reach for the tools that they know other people - or themselves - have used in the past. Often these tools are not the most efficient for the end user.
Many end users (probably a super-majority, if you're being honest with yourself) do not use your software on late-model Macbook Pro computers with 16GB or more of RAM. They're probably using a Windows computer from sometime in the last 6 years and it might have 8GB of RAM if you're incredibly lucky.
A web application that unpacks to 30MB and uses 1GB of memory during runtime is not just unacceptable, it's actively user-hostile.

#6: Aware of the limitations of their own bubble. Actively trying to expand their perspective.

Teacher

This qualification of senior engineers relates to both my third and fifth qualifications: If you care about your craft and you're a good communicator: Share.
Put together the bare minimums on something and throw some slides together and demonstrate something. Sit down with a group who's interested and talk about something. Sit down with an individual and go over something they've asked you about.
Teach.
Teach is the important word here. I've seen a lot of demos where we were in the person's code editor or some third party tool the entire time.
Teaching means breaking something down into foundational components and presenting those. Smart people will build on a solid foundation, but virtually anyone will get lost as you flip through a code implementation.
Being able to break a complex (or even a simple) idea down into a distillate is crucial to being senior. The goal of a senior is not to be king of the party, the goal of a senior is to bring everyone else up to their level.

#7: A sharing, teaching spirit. Senior developers exist to anchor and support the more junior developers.

To me, these are the seven things that make a developer into a senior engineer:

  1. Foresight. Quicker is not necessarily better. Viva Code Liberte.
  2. Deep knowledge of the web platform, in all its parts. Build on the shoulders of giants.
  3. Superb communication skills. Being an engineer is communication. Being a coder is writing code.
  4. Acutely hype-averse.
  5. The mindset of a craftsperson. Caring about the tiny details and the big picture.
  6. Aware of the limitations of their own bubble. Actively trying to expand their perspective.
  7. A sharing, teaching spirit. Senior developers exist to anchor and support the more junior developers.
]]>
https://log.rdl.ph/post/the-seven-things-that-make-a-senior-engineer.html5DkYC9Uxqw0mOqqkAYyKyeFri, 23 Mar 2018 05:30:00 GMT