console.blog (https://log.rdl.ph)
Fri, 12 Apr 2024 16:49:30 GMT

<![CDATA[URL Imports Are A Specified Native HTML Feature]]>
Over the last two weeks, four (4) - yes FOUR - front end engineers who I know to be intelligent people have adamantly stated that importing a module from a URL string is a feature exclusive to either TypeScript or Deno. Given that Deno is - roughly - equivalent to "The JavaScript runtime for people who write TypeScript," I am led to believe that somewhere in the TypeScript ecosystem, someone is spreading misinformation about what the module specification allows.

URL Imports Are A Specified Native HTML Feature

In defining the algorithm to resolve a module specifier, the HTML specification supports treating a specifier as a URL-like module specifier, which in turn will eventually parse the specifier as a basic URL.
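
For example - a minimal sketch, using esm.sh only because it happens to serve standard ES modules over plain URLs - a module script can import straight from a URL with no tooling involved:

<script type="module">
  // "https://esm.sh/lit-html" is a URL-like module specifier; the browser
  // resolves and fetches it per the HTML spec. No bundler, no TypeScript, no Deno.
  import { html, render } from "https://esm.sh/lit-html";

  render( html`<p>Loaded straight from a URL</p>`, document.body );
</script>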

It will work for you, too!

Go ahead, try it!
Open your developer console in this tab right now and paste in:

// A dynamic import straight from a URL - no build step, no bundler, no Deno.
var { html, render } = await import( "https://esm.sh/lit-html" );
// Grab this post's container, clear it, and render our own template into it.
var c = document.querySelector( ".post.long" );
c.replaceChildren();
render( html`<div>This is my blog now lol</div>`, c );

A strawman in defense

For a moment, we could imagine that these individuals had each said something like:

Of the available server or local JavaScript interpreter runtimes, Deno is the only one that allows arbitrary code execution based on a URL.

THAT is a true statement! Of the JavaScript runtimes available to run on your computer or your server, the only one (that I know of) that allows you to paste a URL in and immediately executes that code in the context of your machine is Deno.

It is not true that this is a feature exclusive to either TypeScript or Deno, since the HTML specification has also enabled module loading from URLs. HTML has enabled it because the browser security sandbox is one of the most well-fortified security firewalls ever created.

If - in the course of writing something that runs on your computer - you find that you have the need to download and execute arbitrary remote code at runtime, I would strongly suggest you re-think all of the decisions that led you to this point. Hopefully it was just one or two, and you can backtrack quickly.

]]>
https://log.rdl.ph/post/url-imports-are-a-specified-native-html-feature.html (Mon, 19 Feb 2024 21:41:00 GMT)
<![CDATA[The Web Application Architecture - Intermediate Stage]]>

Part 2: Solving Part 1

In order to discuss more intermediate topics of the proper architecture for a web application, we have to operate on the assumption that your application has already accomplished the fundamentals.

The problem is that the current frog-in-a-pot front end ecosystem has convinced most developers and - by extension - their companies that they have accomplished the fundamentals, and they're ready to move on to more advanced topics. I don't believe this is remotely true. However, I'm not your dad, so I can't stop you from reading this.
Consider, though, that if your development environment has a mandatory build system, or if any of your source code has specific signaling just for your bundler, or if you use React, Vue, Apollo, or any Flux implementation, you cannot possibly have accomplished even one of the three items necessary for the fundamentals of proper web application architecture. These are by no means a full list of the tools and practices at fault, but they are certainly some of the most popular.

That statement probably causes a visceral reaction in you, perhaps even disgust and anger. I am genuinely sorry to make you feel this way. Unfortunately, the level to which web developers have incorporated the tooling that has built up in their toolbelt over time into the core of their identities is alarming, and we can't keep dancing around the fact that the tooling is a huge part of the problem.
If you've incorporated these tools into your identity, you are not the problem! You are valuable and useful! 🙂 But we do have to look at the cold truth about our tooling, processes, and architectures.

So: read on - nobody can stop you. Just know that if you don't truly nail the fundamentals, anything else you do will be built on a foundation of sand.

It's the fun part

The silver lining here is that once you've nailed the fundamentals of de-complexity, robust foundations, and inter- & intra-app communication, everything else becomes much easier.
Removing complexity and establishing a standard way everything "talks" removes much of the difficulty of removing, adding, and upgrading things. You're then left to work on the actual problems that further your software's goals.
But wait! you exclaim, my tooling also allows me to focus on my core business problems instead of all the other junk!
But does it? In every "bog-standard" ecosystem I've worked in, developers almost never solved novel problems or dramatically improved overall application performance or created uniquely elegant solutions to dramatically improve customer experience.
Almost without fail, most developers spend the vast majority of their time fitting pre-defined solutions to inflexible libraries and systems.

So when you escape from that cycle - the one where everything gets worse and worse over time until you can't recover it any more and you either leave the company or are forced into a full rewrite - web software development gets fun again.

In the first part, I was intentionally vague about how to solve these problems. It's possible the right solution for your situation doesn't look like the solution for mine.
However, architecture is about blueprints, so here's one with more specifics.

There will always be software tools

You could absolutely 100% roll your own solutions to every problem, but it's unlikely that you should for the most complex parts of an application.
That complexity should be irreducible (for example: updating the DOM), and not self-inflicted.

There must always be systems

The original sin of most of the terrible software in modern web development is that it consists of tools trying to solve problems that can only be solved by systems.
What's the difference?
Tools plug into specific parts of systems. Systems are ideas. You can encapsulate a system purely in documentation. You should probably also have some code that implements the system, but it isn't technically necessary.

De-complexifying systems

Here it is: What does de-complexifying my application look like?

  • Anything that solves more than one problem at a time is too complex.
  • Anything that deviates significantly from your platform is too complex.

It should be pretty clear why those statements are true, but here's a quick explainer.
If something solves two problems at the same time ("problem" here means a class of application-level concerns, like requesting information from an API, request caching, or data binding), any issue with one problem becomes something that affects all of the other problems.
You have an issue with how data binding into your components works? Now you also have a problem with how network requests to your API work. That intertwinedness should actively raise your blood pressure.
In similar fashion, when something deviates significantly from your platform, you become less able to adapt to underlying changes to that platform. This is the problem with all of the VDOM frameworks in use today: they deviated significantly from the platform. The platform has a native component model and they're unable to even come close to the performance or flexibility provided by the platform.
Experienced engineers warned of this a decade or more ago. Code written in defiance of platform norms is zombie code: already dead and a danger to everything around it.

Simple front end application structure

Recall that simple is not the same as easy. Simple systems separate from each other so that they can interoperate more seamlessly by agreeing on standardized interfaces. This allows them to act as discrete black-box systems instead of having their internals deeply intertwined.

Generic

You could break most web applications down into these four (really three) parts:

Four green rectangles are in a horizontal line with some space between each. They are labeled, in order, 'Data', 'Business Logic', 'Display Logic', and 'UI'. The rectangles are connected with arrows. 'Data' has a double-headed arrow connecting to 'Business Logic'. 'Business Logic' is connected with a single-headed arrow pointing to 'Display Logic'. 'Display Logic' is connected with a single-headed arrow pointing to 'UI'. 'UI' is connected back to 'Display Logic' with a separate single-headed arrow labeled 'UX Reactivity'.
A basic web application has three (or four) parts: Data (retrieval, storage), Business Logic, and Display Logic (which results in part four: the UI). The UI sends signals back to the display logic based on user input.

Specific

It could help to better define these parts of the application, so we can rename each more precisely:

The four green rectangles from previously now have updated labels. Respectively, they are: 'Data', 'Application Value Proposition', 'Pages, Components, & Component State', and 'DOM'. Additionally, the arrow between 'Application Value Proposition' and 'Pages, Components, & Component State' is double-headed. In a more generic system, 'Business Logic' would unidirectionally drive 'Display Logic', but with this more specific outline, 'Pages, Components, & Component State' should likely have a feedback loop to the 'Application Value Proposition'.
A more precise web application has four parts: Data; the Application's Value Proposition (why does it exist?); Pages, Components, & Component State; and DOM. The DOM sends signals back to the component or page based on user input.

Specifically correct (separation of the parts)

In these images, each part of the application is connected directly to another part. This type of connection is how most applications today are built, and it encourages deeply broken interactions and spaghetti code. This is not only because there aren't clear boundaries about where things should go, but also because it fundamentally doesn't matter: the application's UI is virtually indistinguishable from its core value proposition. If you put application logic directly in a UI component, it doesn't matter, because the UI is the business logic.
This is - of course - madness (See De-complexifying systems, just above).
A robust, scalable architecture requires that each part of a system operate in a predictable way. To keep predictability to a maximum without complex interdependencies causing literal exponential growth of potential outcomes (see: Enumerate, Don't Booleanate for an exploration of explosive exponential growth of interdependent state on a micro scale - within one component), each part should attach to a standardized interface.
You could call this "Signals" or "Actions" or "Events" or "Messages," but whatever they are, the idea is that the application has a backbone of messages passing back and forth, and when a relevant message is noticed, something occurs. Critically, this is the only way the application should operate. Every behavior and data change should be able to be followed through a clear history of messages being passed. I have talked about this multiple times before. I also built gravlift to be this carrier backbone for any system.

The same four green rectangles as previously are still present, and labeled the same. The two spaces between 'Data', 'Application Value Proposition', and 'Pages, Components, & Component State' are much wider and their connecting arrows are gone. In those spaces, and below the line of horizontal green boxes, are two new blue boxes, each labeled 'Messenger'. Each has two double-headed arrows. The two on the left 'Messenger' box connect to 'Data' and 'Application Value Proposition' respectively. The two on the right 'Messenger' box connect to 'Application Value Proposition' and 'Pages, Components, & Component State' respectively. This shows how the standardized messaging interface is the go-between 'shuttle' for all application behaviors.
To standardize on communication in an application where each part is separate from the others (the goal!) we add a Messenger, which shuttles standardized message packets between the Data, Value Proposition, and Page/Component parts of the application. The DOM remains an artifact of the Page/Component part of the application and does not directly interact with the core structure of the application, only sending reactivity signals back to its parent Page/Component.
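
A minimal sketch of what such a Messenger could look like - illustrative plain JavaScript, not gravlift's actual API:

// Every message is a standardized packet: a type string plus a payload.
const subscribers = new Map();

function listen( type, handler ) {
  const handlers = subscribers.get( type ) ?? [];
  handlers.push( handler );
  subscribers.set( type, handlers );
}

function send( type, payload ) {
  for ( const handler of subscribers.get( type ) ?? [] ) {
    handler( payload );
  }
}

// The Data part announces a change; the Value Proposition part reacts to it.
// Neither knows the other exists - they only know the Messenger.
listen( "dog:saved", ( dog ) => console.log( "value proposition saw", dog ) );
send( "dog:saved", { id: 1, name: "Rex" } );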

Different types of display logic

The "Pages, Components, & Component State" section is doing a lot of work, so we can split that up appropriately.

The four green rectangles and two blue rectangles from previously have been split up. Everything left of the second blue 'Messenger' rectangle remains the same: 'Data' and 'Application Value Proposition' still connect through the first 'Messenger' box. 'Application Value Proposition' still connects to the second 'Messenger' box. The final two green boxes have been replaced with a set of four new green boxes in a square layout to the right of the second 'Messenger' box. Clockwise from the top left, the new green boxes are 'Components', 'DOM', 'DOM', and 'Pages'. 'Components' and 'Pages' each connect to the second 'Messenger' box with their own double-headed arrow. 'Components' adopts the configuration from the previous image and has a single-headed arrow connecting to the first 'DOM' box. That 'DOM' box has a separate single-headed arrow labeled 'UX Reactivity' that connects back to 'Components'. The 'Pages' box connects to both 'Components' and the second 'DOM' box with unidirectional, single-headed arrows. This indicates that 'Pages' can render their own DOM and other components, but components can't render pages. Only components have UX Reactivity.
Pages and components are two sides of a coin, but they have slightly different use cases. Primarily, pages can render components and DOM. Components can only render other components and DOM. Only components describe UX reactivity. DOM is still the artifact of these two classes of display logic.
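
As a minimal sketch (the element name and event are purely illustrative), a component built on the native component model renders its own DOM and reports UX reactivity upward instead of reaching into the rest of the application:

// A component owns its DOM artifact and surfaces user input as a signal.
class ReloadButton extends HTMLElement {
  connectedCallback() {
    const root = this.attachShadow( { mode: "open" } );
    root.innerHTML = `<button>Reload</button>`;
    root.querySelector( "button" ).addEventListener( "click", () => {
      // UX reactivity: the DOM tells its owning component/page what happened.
      this.dispatchEvent( new CustomEvent( "reload-requested", { bubbles: true, composed: true } ) );
    } );
  }
}

customElements.define( "reload-button", ReloadButton );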

Defining data

For many people, defining data is difficult. This is made especially difficult because many tools improperly conflate the concepts of application state (what is the application doing right now?) and core application data (what information is the user here to see?), storing both the same way and providing the same interfaces for accessing and mutating them.
The clear defining line is "If you wouldn't store it in your database, it's not data." Loading indicators, which sections of a page have been collapsed by the user, which temporary view modification is enabled, and more are all examples of data you wouldn't store in your database. These are component-level state concerns and don't belong in data.
Data is a broad abstraction of three separate things. Your application's value proposition should not be aware of these distinctions; that's why the Data interface is separated by the standardized Messenger.

  • Browser Storage
  • Single-Concept Utilities
  • API
The previous image's structure remains. The 'Data' box now has a trident arrow to three purple boxes. They are labeled 'Browser Storage', 'Single-Concept Utilities', and 'API'.
The Data interface blends access to a back end API, browser storage (for local caching), and single-concept utilities.

The "single-concept utilities" above requires a bit of explanation. It is very common to need to do certain things with known types of data that should happen based on raw data at this low level of the application. For example, you may be building an app that allows users to view and rate dogs (they are all 12/10). After the application gets data from the API, it may need to import the normalize function from Dog.js and make sure to smooth over any potential rough edges in the data from the API. Dog.js might also provide some helpers like makeNew for when someone is submitting a new dog, and the system needs a "default" Dog value for them to fill in.
The two-fold key is that these utilities are tightly focused (for example, just Dog.js, not normalizeData.js) and they are limited to data. There's a whole part of the app for the value proposition code; this area should just be for unavoidable data munging.
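
A minimal sketch of such a utility, continuing the Dog.js example (field names invented for illustration):

// Dog.js - a single-concept data utility. It knows about dogs and nothing else.
export function normalize( raw ) {
  // Smooth over rough edges in whatever the API returned.
  return {
    id: raw.id ?? null,
    name: ( raw.name ?? "" ).trim() || "Unnamed dog",
    rating: raw.rating ?? 12, // they are all 12/10
  };
}

export function makeNew() {
  // A "default" Dog for new-submission forms.
  return { id: null, name: "", rating: 12 };
}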

More elaborate components and component DOM

Components hold a bit of information (the state of the application which - as we covered earlier - is not data). Component DOM also updates regularly, which means we should define how that happens.

Everything from the previous image remains intact. A purple box labeled 'Finite State Machine' is connected to 'Components' with a single-headed arrow. Two purple boxes labeled 'Surgical Updates' and 'Web Components' are connected to the Components 'DOM' box with single-headed arrows.
Application state is encapsulated in component Finite State Machines so that it can be both catalogued and repeated. DOM updates are done as little as possible - updating only what's necessary, not recomputing and re-rendering the entire tree. Components use the web-native component model.
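
A minimal sketch of a component-level finite state machine - the states and events are illustrative, and any FSM tool (or a hand-rolled one like this) works:

// Every state and every allowed transition is catalogued up front.
const reloadMachine = {
  initial: "idle",
  states: {
    idle: { REQUEST_UPDATE: "updating" },
    updating: { UPDATED: "idle", FAILED: "error" },
    error: { REQUEST_UPDATE: "updating" },
  },
};

function transition( current, event ) {
  // Unknown events are ignored: only catalogued transitions can ever happen.
  return reloadMachine.states[ current ]?.[ event ] ?? current;
}

let state = reloadMachine.initial;             // "idle"
state = transition( state, "REQUEST_UPDATE" ); // "updating"
state = transition( state, "UPDATED" );        // "idle"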

A blueprint for long-lasting, high-performing applications

And so we have arrived at the simple application architecture. Had you followed each of the principles in part 1, you might have arrived at a similar architecture. It is not necessarily easy, but the best things never are. It is undoubtedly elaborate, but most meaningfully complex web applications are elaborate. In this structure, the complexity is kept from seeping into every corner of the code.

  1. The application is split into four discrete parts:
    1. Data
    2. the application's Value Proposition (the Business Logic)
    3. the display logic, split between Pages and Components
    4. the rendering artifact of that logic: the UI, which returns reactive signals back to the components
  2. Communication is passed among the Data, Value Proposition, and Pages/Components by a message system with a standardized format
  3. Data reflects database information. It is not application state. The Data application layer abstracts:
    • accessing the API
    • Browser Storage (local cache, for one example)
    • Single-concept utilities, which help normalize and interact with data constructs
  4. Components use Finite State Machines to catalogue non-exponential lists of application states, the non-data type of information your application must manage.
  5. Component DOM updates are managed by the family of Web Components browser features (like Shadow DOM, custom elements, and constructable stylesheets). These updates are not full-component re-computes (like VDOM libraries), but instead maintain connections to live versus static parts of the template, and only update those parts that have received changes.
]]>
https://log.rdl.ph/post/the-web-application-architecture-intermediate-stage.html (Tue, 23 Jan 2024 04:13:00 GMT)
<![CDATA[2023 ↦ 2024]]>

2023: A Recap

Last year, I had a few goals, mostly around consumption and creation.
I think it's fair to say I accomplished them all, because I am nothing if not a realistic goal-setter.

In fact, it was true that I had accomplished two thirds of them by the end of March, which perhaps reframes my realistic goals as unchallenging goals.
I am nothing if not comfortable in a life without self-imposed challenge.

I did read two more books after I posted that in March, but that's an average of .22 more books per month, which is uninspiring.
As I noted, I tend to get stuck when a book irritates me, and then never pick it - or any other book - up again for a long time. Last year, that book was the 10th anniversary edition of The New Jim Crow by Michelle Alexander.
I hadn't read The New Jim Crow originally, so I picked it up to "catch up." Unfortunately, this version has a 37-page new preface, a mercifully short 3-page foreword, the original sub-1-page preface (so you do know how!), and then a 23-page introduction.
I am not sorry about this: if I've read 64 pages before I get to your book something has gone terribly wrong. Also, here's the new preface: "Things have not changed in the last 10 years." You see how that's not 37 pages?
Anyway, I got pissed off at this author and put the book down and didn't pick another book up until late December.

I completed 10 games in 2023, which - as noted in March - was certainly far more than 2022.

Perhaps most importantly, I got my side project - a game tracker (?) - to a version 1 usable state. Version one isn't fully complete, but it's mostly functional with just a couple rough edges.
The most restrictive thing - and the one I think most people would take issue with - is that there is no public-facing user data. The list of games I completed in 2023 that I linked above has a title (and of course I created it, and I have a username), but I've explicitly chosen not to share those pieces of information publicly. As soon as I show user-entered data to people other than the user who entered it, I have a content moderation problem, and it's one I'm currently unprepared to handle.
Future iterations of this application will ship moderation and safety tools, but for the moment it's single- not multi-player.

Into 2024

Last year was mostly about finishing things. Finish more books. Finish more games. Finish a (first draft of a) side project.
That's good, but it means turning things up a notch in a way that I'm not sure is sustainable for me.

This year, I have goals around improving what I already do.

Continue reading

I'm going to give myself a little break and not set an arbitrary goal like "more" or "1 a month."
I should read more, but I'll settle for just as much.

One way I'm going to try to facilitate this is by allowing myself to let go of untenable goals.
If 2023 taught me anything, it's that I know when I'm in a situation that doesn't work for me, but I lie to myself that I just need to give it another chance or take a little break and come back to it and it'll all be fine.
I need to trust my instincts more.

Be more mindful of my health.

I did some road cycling in 2023, but I also consumed a lot of extra calories. I went to doctors less than I should have (although probably more than a lot of people). Good news: it wasn't cancer!
This year, I want to be more mindful about my body (and how I feel in it), the things I consume, and my long-term health. I'll be 35 this year, and as I move into true middle-age (oof), it will be harder and harder to undo past mistakes.
Ultimately, I want to live with a more healthy mindset so that I don't have regrets about my health, my body, or my capabilities.

Say yes more often.

Many, many years ago, I realized that I had prevented myself from a full breadth of experience in life by saying no as a default. Trying to counteract that was why I moved states away from almost anyone I knew, it's why I bought a motorcycle (a calculated high-risk activity), and why I did any number of other things over the last decade-plus that were slightly out of my comfort zone.
Don't get me wrong, I have an incredible credit score (😉), I'm completely healthy, and I have a university degree and a privileged career and life. Living in a way that minimized my future risk did pay off, but my story is not interesting.
That is - in some ways - my biggest shame: I don't feel like I have an interesting story to tell about myself. I am an excellent listener, and I know how to ask a prodding question to get other people to keep talking, and I like that. I learn other people's stories and I'm a willing ear (and shoulder) for a lot of people. But I sometimes think I am this way because I don't want to talk about myself. I feel like I don't have anything interesting to say, so I'd rather other people talk instead.

So this year, I'd like to say yes more - where it's appropriate. I want to increase the risk I take on, even if that risk is just a false perception where my brain is telling me I shouldn't do a thing because what if somebody sees me and they don't think I'm cool enough 😱?
Even if (especially if) it pushes me outside my comfort zone, I want to build a life that fosters experiences.

Make a dollar on the side.

Now that my side project for tracking games is to a totally usable (if not polished or fully complete) state, I want to make a dollar in revenue from it this year.
Yes. One dollar. A single United States smackeroo. If I make more, great.

By setting my goal to make any money at all, I will be forced to set up the mechanism for it, get the application onto a real domain name, and maybe even advertise it a little.
On that note, here's a testimonial from a real user who had extremely early alpha access.

I'm really enjoying your games website - I didn't realize I wanted it until I started using it, but being able to keep track of games I want to play/keep an eye on is super nice.

In a way, this has shown me a whole new problem: People don't know they want my product, but I think that's the entire premise of marketing...

This year, I'd like to set the gears in motion to free myself from working for other people.

Alright, that's it for now. See you soon.

]]>
https://log.rdl.ph/post/2023-2024.html (Mon, 08 Jan 2024 01:25:00 GMT)
<![CDATA[What are your web development first principles?]]>

What is the unshakeable foundation on which you build software for the world (wide web)?

Recap

We should use a very loose definition of first principle, because in theory a first principle cannot be deduced from any other assumption in a given system. As I mentioned in ... There Are Fundamentals of Web Development, you could - in theory - look all the way back to physics (electrons) as your first principles. Sure, electrons exist. Let's move up our stack a bit.

These Truths Are Self-Evident

Here are some unshakeable, foundational, first principles of Web Development:

  • Modern web browsers are some of the most complicated, robust pieces of software on the planet.
  • The vast, vast majority of users are using slower, older devices on unreliable - and often very slow - networks. See The Performance Inequality Gap, 2023
  • Professionalism is a combination of competence, respect, and ethics.

Web Development as a Career

If software engineering for the internet is a career, then we must assume you can be a professional in it. As noted, there's a basis of professionalism, so it behooves us to define professional web development:

  • Understand the inescapable tools of our trade (web browsers, the languages that run therein, security, how the internet works, etc.).
  • Work as agents on behalf of users who do not understand our field or the complexities of it.
  • Practice ethics and respect toward our product and the users who use it, and demonstrate competency in the platform we depend on.

The modern web is a scathing indictment

The utter disaster that is modern web development is an embarrassment of historical proportions. Lest you think I believe this is exclusively that one framework's fault, allow me to be clear: Web Developers are obsessed with their own experience at the expense of everyone else. This is true regardless of the framework they use. It's true that React is by far the worst and the most popular, but that does not excuse the rest of the bunch.

Professional software engineers should be able to discuss the features of their platform without having to explain first principles ("Yes: the web browser does that by default."). They should be able to assume a baseline for users ("No: most people are not using an iPhone 15 Pro Max; most users are on an old Android phone in South Asia."). They should understand the societal and political implications of the results of both their conscious actions and their many other decisions, whether by inaction or by implicitly choosing the default ("Yes: those people do matter, and shipping software that can't be used by them for any reason under your influence is an outcome caused by racism - or ableism, or ageism.").

Professional web developers shouldn't have to - but often must - push back on vanity metrics masquerading as equitable access. An application that seems fast up front but goes on to load 100%-200% more afterward is not a fast, equitable, accessible application. It is built for the top earners in primarily Euro-centric countries. This is what subtle racism looks like in the real world. It can't just work for you and people like you.

The P75 latency for interaction (in an evenly distributed population) isn't the continuous experience of a single user; it's every fourth tap!
small, individually innocuous changes add up to a slow bleed that can degrade the experience over time
A Management Maturity Model for Performance - Alex Russell

Doomerism

I'm hardly the first - or the most eloquent - person to write on the topic of "This all sucks." I also don't have a magic solution to it all. Of course I'd say be relentless in adhering to the foundations of web application architecture.

Maybe I'd also say: Really, really pay attention to who benefits from decisions, in-decisions, and defaults. Are you willing to sacrifice - in your professional capacity - your own comfort to better serve the human beings you are acting as an agent for? To be clear, you are not an agent for shareholders. Ethics, responsibility, and competence demand we work on behalf of the underserved.

Are you making the difficult decisions every day to show that you're a professional web developer?

]]>
https://log.rdl.ph/post/what-are-your-web-development-first-principles-.html (Wed, 18 Oct 2023 17:50:00 GMT)
<![CDATA[The Web Application Architecture]]>

Part 1: The fundamentals

Because this is a topic that is both broad and deep, it's not efficient to try to pack everything into one entry. Instead, for this first post, I'll just cover the three most foundational topics for a front end application.
That is:

  1. De-complexity
  2. Robust foundations
  3. Inter- and intra-app communication

De-complexity

The first principle of any web application must be a strict adherence to simplicity. I beg of you that you watch this video of Rich Hickey giving a talk titled "Simple Made Easy" at Strangeloop 2011.

One of the many take-aways from that talk is this: Simple is the opposite of complex, and complex means "interconnected" or "intertwined." Simple code is not monolithic, because monoliths touch many parts of your application, inherently increasing its complexity. Moreover, simplicity is not the same as ease. The two are not opposed ideas, but to avoid complexity, you must sometimes choose the simple thing over the easy thing.

Robust foundations

In tandem with the rejection of simplicity in favor of temporary easiness, the lost decade of development also built a culture of platform rejection. Despite building for the web, the companies and thought-leaders driving the industry pushed for less and less dependence on it. Throughout the years, this gave rise to many attempts to bridge the gap back to the platform like prop drilling, global variables (Flux, Redux, Vuex), "polymorphic" code, and on and on. Each aimed to solve a problem introduced by the fevered dogma that the way things were done before was bad, actually, and we've figured out the right new way. Never you mind all these glaring issues, we'll patch them over with the hottest new thing you need to add to your stack. Developers constantly longed for a return to the platform, but they were sold a pack of lies that the platform was broken and they needed these bloated tooling solutions (and of course their glorious leaders) to make good software.

Features that should have been implemented on top of browser-native tools were instead implemented - shoddily, slowly, and in user-hostile ways - in authored code, running through a slow compiler (sometimes multiple slow compilers!).

The creator of JavaScript often says: "Always bet on JS". As he notes, this is not because JS is necessarily great, but because it is established and evolvable.

Indeed, JS - and the entire web platform - has been evolving (quite rapidly) while the anti-web web developers have been churning through libraries and patterns to try to re-implement the web on top of the web, while trying to avoid using the web.

There are far too many deeply technical topics to go into thoroughly here, so here is a short list of facts:

  1. Your user interface - if it requires components - should be fundamentally based on the family of Web Components technologies: custom elements, Shadow DOM, and their associated APIs like constructable stylesheets. This is core to the platform and a non-negotiable skill nearly a decade after their wide adoption and support.
  2. Data is separate from application behavior, component structure, or network access. This means that data must be managed as a separate entity from these other parts, just like it is at rest (in your back end database). In practice, you will likely use browser-native storage like IndexedDB for long-lived data, localStorage for short-lived data, and sessionStorage for even shorter-lived data (a brief sketch follows this list). Components should have no idea this exists.
  3. Front end routing is not a user interface component concern.
  4. Network (or any other side effect like disk access, peripheral control, data storage & retrieval, etc.) is not a user interface component concern.
  5. The user interface has a particular state which is wholly and completely separate from your application's data. They should never mix or even be stored in similar ways. A component is a machine that can move through pre-determined states in pre-determined ways; represent it as such, specifically for that component. You could use a tool like a finite state machine.
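
To make item 2 concrete, here is a minimal sketch of that storage tiering (store and key names are illustrative):

// Long-lived data: IndexedDB (asynchronous, structured storage).
const open = indexedDB.open( "app-data", 1 );
open.onupgradeneeded = () => open.result.createObjectStore( "dogs", { keyPath: "id" } );
open.onsuccess = () => {
  const tx = open.result.transaction( "dogs", "readwrite" );
  tx.objectStore( "dogs" ).put( { id: 1, name: "Rex", rating: 12 } );
};

// Short-lived data: localStorage (persists until explicitly cleared).
localStorage.setItem( "last-sync", new Date().toISOString() );

// Even shorter-lived data: sessionStorage (scoped to this tab's session).
sessionStorage.setItem( "draft-filter", "good-dogs-only" );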

Inter- and Intra-app Communication

Other than a general focus on simplicity and relying on the robust platform itself, one of the most fundamental parts of building a web application is addressing how things communicate. Some of those "things" will be within the same app, and some of those "things" will be other apps, but they all need to communicate.

While "app communication" may not be the most important concept (that's undoubtedly a fervent adherence to simplicity), it is without a doubt the most important section to grasp and implement correctly, which is why most of this post is dedicated to it.

The previous sections have been - largely - conceptual. Application communication moves from concepts like "simplicity" and "using the platform" to critical direction. Failing to solve the problem of application communication in a versatile, flexible way that still maintains application simplicity is a matter of survival: applications that fail in this manner always develop deep performance and user-satisfaction problems - always sooner than expected - and they always become complex, tangled messes that are intractable. The "dreaded full rewrite" is a trope not because it's fun, but because developers fail to architect the communication fundamental and reach for the same broken solutions.

Many supposed solutions to this problem have arisen from the ashes of what modern development torched as "the past." There were solutions that relied heavily on the concept of data changing (Flux, Redux, Vuex). There were solutions that depended on API responses - cached or not - only to end up being essentially the same concept (GraphQL, Apollo). There were solutions that demanded extremely tight coupling, which resulted in code patterns like property drilling and logical mixins.

In nearly all of these so-called solutions, imperative commands are integrated directly into where they are needed in the moment. This is easy and convenient for developers, but it leads to out-of-control bloat and - perhaps more importantly - disastrously fractured code that attempts to interface with the application in just one way, but tries to solve many problems that become increasingly specialized over time. The costs of this bloat and complexity are never borne by the developers; they are always passed on to the consumer.

The actual solution - which is foundational to the entire internet and web browsers - is to use events.

If an application strictly adheres to an event system, it cleanly separates interactions from actions, and triggers from behaviors. Safe in that separation, the application is free to add more specialized actions and behaviors, bifurcate interactions and triggers, and generally grow linearly and - critically - simply.

It may be helpful to think of this architecture as similar to one of the most successful architectures of all time: the human body.

Take a simple action that's often used as an example: touching a hot stove. There is no embedded command in the hands or fingers for "when you touch a hot stove." What happens is that the nerves in the fingers identify a trigger: "this temperature is extremely high." That trigger is passed along to the brain just like that: "the fingers have sensed an extremely high temperature." The brain then decides how to handle that. In the vast majority of people, the brain converts that message into "pain!" Consider that "pain!" is not handled by the fingers or the hands. It is the central nervous system's generic response to many events. It just so happens that the brain responds to "the fingers have sensed an extremely high temperature" with the generalized reaction of "pain!" Finally, the brain (when functioning normally) sends a message back down along the nerves: "the hand should be retracted!" This is managed by the particular appendage, which knows how to receive messages like "retract hand!"

A flowchart with a large 'Brain' section at the top. Below the brain on the left side is a section titled 'Knowledge and Abilities'. Below the brain on the right side is a section titled 'Body'. The two sections are separated by a red barrier. They do not directly communicate with each other. Each section has bi-directional communication with the Brain. Knowledge and Abilities communicates with the brain by receiving triggers and sending reactions. Body communicates with the brain by sending triggers and receiving reactions. Knowledge and Abilities has two main items: 'Pain!' and 'Retract Hand'. It also has two other items, which define how it handles triggers. These latter two items are structured with up to three sections: when, send, and act. The first of these items has all three sections and it reads: 'When Pain! Send Retract Hand. Act by creating a negative association.' The other item creates a more contextual trigger. It reads: 'When extreme temps, send Pain!' The body section has its own abilities, but is mostly handlers. The body has the 'contract muscle' ability. The body has sub-sections titled Right Hand and Right Arm. The right hand has two items in the same 'when/send/act' structure as the brain. The first of these items reads: 'When extreme temperature, send extreme temps.' The latter reads: 'When Retract Hand, contract muscle x, contract muscle y.' The Right Arm section also has one of these items. It reads: 'When Retract Hand, contract muscle z.'
An overly-simplistic outline of how the brain and body interact with messages.

This brain/body structure works well because each part of the body doesn't need to know what everything means. The brain knows that extreme temps mean pain and knows how to react to that, but the hand simply sends along the thing that's happening and knows how to contract muscles.

This separation is important, because it makes humans flexible. We can learn new reactions and associations without growing a new appendage. If we lose a hand, we can adjust how we live our lives because what we need to do is handled by the brain, which can adapt and send the same common reactions in unique combinations to get the same job done.

This flexibility is possible in web applications, too. Consider a similar flowchart, but for reloading some data in a display.

A flowchart with a large 'Events' section at the top. Below the events on the left side is a section titled 'Business Logic'. Below the events on the right side is a section titled 'Components'. The two sections are separated by a red barrier. They do not directly communicate with each other. Each section has bi-directional communication with the events area by sending and receiving events. The business logic section has one main ability and one handler. The ability is to 'Update our-data.' When that ability succeeds, it sends the event 'our-data:updated.' The handler item has three parts: when, send, and act. This handler reads: 'When we receive the event our-data:request-update, send our-data:updating and act by triggering Update our-data.' The components section has a component called 'Data Table'. The data table has three of the items that have the when/send/act structure. The first item reads: 'When the user clicks the reload button, send the our-data:request-update event.' The second item reads: 'When we receive the our-data:updating event, disable the reload button and show a spinner.' And the third item reads: 'When we receive the our-data:updated event, enable the reload button, hide the spinner, and update our data.'
A simple example of the "brain/body" structure of an application event system, showing a "data reload" cycle of triggers and behaviors.

Note that this application does not have to connect "reloading Our Data" to the data table button, or even to any UI element at all. It's just an event that can happen as a result of anything, including user interaction. Likewise, note that the data table doesn't update when the data for it changes, it updates when the application tells it that an update occurred. That doesn't necessarily mean the data changed, just that the application considers the data updated. That's a powerful nuance that disconnects triggers from behaviors. The data table could refresh its data display at any time the our-data:updated event is observed, regardless of the trigger.
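
A minimal sketch of that cycle on top of the platform's own EventTarget - the event names follow the flowchart above, while reloadButton and renderTable are illustrative stand-ins:

// The application's event backbone is just a plain EventTarget.
const bus = new EventTarget();

// Business logic: owns "Update our-data" and announces its lifecycle.
bus.addEventListener( "our-data:request-update", async () => {
  bus.dispatchEvent( new CustomEvent( "our-data:updating" ) );
  const data = await fetch( "/api/our-data" ).then( ( r ) => r.json() );
  bus.dispatchEvent( new CustomEvent( "our-data:updated", { detail: data } ) );
} );

// Component: translates user input into a trigger, and reacts to behaviors.
reloadButton.addEventListener( "click", () => {
  bus.dispatchEvent( new CustomEvent( "our-data:request-update" ) );
} );
bus.addEventListener( "our-data:updating", () => {
  reloadButton.disabled = true; // and show the spinner
} );
bus.addEventListener( "our-data:updated", ( event ) => {
  reloadButton.disabled = false; // hide the spinner
  renderTable( event.detail );
} );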

Like a body and brain, this strict separation allows flexibility, adjustment, and remixing. New combinations of reactions are possible without affecting what triggers cause those reactions or where they originate.

.finally

This is not a comprehensive entry on this subject. The web application architecture is an extremely broad topic, and it has deeply technical discussions embedded in it.

But, for a beginning, this will have to do. Focus on these three things to eliminate the technical debt of bad early decisions that compound for years.

  1. Simplicity above all
  2. Use the robust platform features
  3. Communicate only with events
]]>
https://log.rdl.ph/post/the-web-application-architecture.html (Wed, 23 Aug 2023 18:09:00 GMT)
<![CDATA[Game Dev devlog #1]]>

When I wrote an update on my 2023 resolutions, I included a little tidbit about being burned out on web development.
It's still true, but I want to write down what I've been learning about Godot before I forget.

Getting started with Godot

Whoa! Wow!
It's so easy to get started with Godot by just following their "Your first 2D game" tutorial. I'm not even worrying about 3D at the moment, since it dramatically increases the complexity of any given thing.
Maybe more than anything, what I appreciate the most about this tutorial is that I feel like I've learned many of the basics that I'll carry through with me as I do many more things. That is: I didn't learn how to make the Godot 2D tutorial, I learned how to make any 2D game in Godot.

I immediately took what I learned from the tutorial "Dodge" game and made a "Catch" game (you are a bucket, catch falling objects, score goes up). While the main principles are essentially the same (read: it didn't require much to switch the gameplay), the key is that with just a brief couple of hours with Godot, I felt like I had a decent grasp of the fundamentals. That's pretty amazing for any tutorial, much less a game engine.

More advanced topics

The next game I made was a little side-scroller with cute graphics that I bought from Cup Nooble on Itch.io. To be clear here, I'm not trying to make games to make a game, I'm making games to learn concepts - which I'm self-assigning.
In this case, I wanted mostly to learn the tile system and cameras plus a bunch of stuff all related to collisions (like hitboxes, item pickups, etc.).
This was the first time I felt really lost, and it wasn't with anything you'd probably expect. Spawning a mob in and picking up a weapon and particle effects and hitting enemies and moving the camera to follow the player and so much more was really quite easy. Granted, the code is a bit of a mess, but if I've learned anything from my decades in dev, it's to not focus too much on code quality during the "dabbling" phase.
No, the part that stumped me is Godot's tile system and editor. I cannot - no matter how much I try to understand it - figure out how the hell to get it to automatically paint reasonable maps.
Ah well, I learned a lot of other stuff in this little dabble, and - while it's unbounded and monotonous - I think the resulting little toy game is actually pretty neat! (It's fun, too, but only for about 3 minutes)

More advanced!

Once I had something "game-like" under my belt, I wanted to start learning more science-y things. On the roster is stuff like procedural generation and shaders (which I hear are the secret behind many visual effects).
I decided to start with procedural generation because I saw this video and HOW HARD COULD IT POSSIBLY BE, IT'S JUST MOORE NEIGHBORHOODS AND CONWAY'S GAME OF LIFE!
So it turns out that parsing a fixed region using Conway's Game of Life rules to cellular-automata my way to a reasonable map is not very hard. It's quite easy. The problem is procedurally generating new regions as the camera moves and making the new regions merge with the old regions in a sensible, imperceptibly seamless way. That's the hard part, and while it's not an impossible problem to solve, it's where I'm currently stuck.
I suppose "stuck" isn't exactly the right phrase. I've stopped working on this for the moment, mostly because I wasn't getting really fast feedback on my code any more, and I had other things I needed to get to. I've built code* that takes a region and generates a map for it, and it seems to work pretty well in combination with existing regions (it uses a physical buffer to load in any possible existing tiles around the region to be generated, using them as seeds for the cellular automata, but not modifying them).
The next thing to tackle is computing a new region to generate based on the camera location: it needs at least 8 points spread around the camera that trigger a region generation when they encounter blank space, but the regions have to be anchored in the original grid dimensions otherwise they'll randomly overlap, creating absolute mayhem. That means I'll need to work backward from "a trigger point" to "what's the least number of originally-sized regions starting from the original spawn point that will get me to a point that will cover this point?" I imagine this is very inefficient and I have no idea how this is done in other games.

Which leads me to probably my two largest take-aways from my time learning game development so far:

  • It does not seem like game developers share a lot of their knowledge freely
  • and/or You really have to learn specific game development terminology to have a hope of finding any information

As an example of the latter, I was trying to "mask" the pop-in of a sprite (that's a game dev-specific word for "image") and was basically searching for "create temporary sparkle effect [in Godot 4]." By some miracle, I did stumble upon some tutorials for creating magic spell effects, which led me to particle effects and particle generators. But, I probably wouldn't have just discovered that on my own, and it didn't feel like I organically got to that result based on my search results. It just so happened that I specifically wanted a "sparkle" and that took me to "magic wands" which eventually got me to particles.
It feels like game dev is begging for someone to make some content optimized for beginners to come in and ask beginner questions and get useful answers right away. Here's a search that feels even a little advanced for a total beginner to type in and literally none of the results look even remotely useful.
To be clear, here's more or less what I was trying to accomplish:

A short bit of gameplay from my first game, where little cows spawn in randomly, but their jarring spawn is obscured by a big green sparkle effect so you can tell they appear, but it sort of seems like maybe they teleported in or something
In this very peaceful not violent at all game, I want cows to spawn in randomly around the player, but if they just show up out of nowhere, it's a bit weird. To obscure the pop-in, I added a big green sparkle particle effect to cover the cow when they appear. It fades quickly.

Forward!

I'm still incredibly excited about game development, but as with most new things, the instant dopamine hit and ultra-fast feedback loop has slowed dramatically. I still enjoy the challenge of solving a game dev problem, but I'm not as obsessively pulled into it when I have other things to do (like sleep).
I'm still itching to get back to finishing my procedural generation (I'll probably call it done once it's infinite†; doing different biomes based on smaller slices of the noisemap sounds interesting, but it's too far into the weeds for me.), and I'm very amped to learn about and play with shaders, so when I have time, I'm going to continue forging ahead!
I've been building up some ideas in my head for a game I'd like to make, and I feel like once I have the skills to actually get it out into a program, I'll be able to refine the "mechanics" and "gameplay loop" and "goals" part of game development.

*: I may open-source this code at some point, but with the current ecosystem of web scrapers masquerading as artificial "intelligence" stealing code and handing it to untalented hacks as if it created it, I'm not sure how I feel about my code being available for free any more.
†: Infinite procedural map generation is cool, but a game really needs to be begging for an infinite map and have real story reasons to include it. Well-crafted maps / levels are almost always superior, so getting far down the rabbit trail of procedurally generated biomes feels like a waste of my time at this moment.

]]>
https://log.rdl.ph/post/game-dev-devlog-1.html (Fri, 31 Mar 2023 20:52:00 GMT)
<![CDATA[Maybe I set my goals too low]]>

I resolved to...

Near the end of January (I did not resolve to be more punctual), I posted My Resolutions for 2023. Since it's now near the end of March - quarter 1 - I thought it would be a good time to check in.

Read more books

This one has been pretty easy for a number of reasons:

  • I just generally want to do fewer digital things and more real things.
  • Travel is a bit more "on the table" this year, and I can usually finish a book on a plane ride or two (plus having fewer available forms of entertainment while traveling leads to more reading).
  • I had already read one book by the time I wrote down my resolutions.

As you can see, I've already completed this goal, which feels like maybe I didn't set the goal far enough outside my comfort level. Maybe I should update this goal to be "Read 12 books in 2023," which is an average of 1 per month, and I think that's totally reasonable.

Anyway, here's a quick review of the books I've read so far:

  • Jurassic Park: As noted, it's a banger. Dinosaurs go hard.
  • The Road: I'll just copy a text I sent to a friend here - "Pulitzer prize winning drivel, boring, empty, devoid of purpose both theoretically (there is no story) and practically (you can't get anything out of reading it)."
  • Hold Me Tight: It's more of the standard model of modern psychology, which can be boiled down to "adults are just larger versions of small children 3-14 years old, and all of their emotions and feelings are essentially identical and fundamentally unchanged." This suggests some extremely uncomfortable, obvious extrapolations that this "current wave" of psychotherapy seems uninterested in exploring, and I'm tired of it.

Finish more video games

This is a great goal, because while I like video games, I don't revel in every single second of their existence. My favorite part of this resolution is "Allow myself to not experience every possible piece of content in a game." I am an Unwilling Completionist: I feel the need to 100% a game, but I don't enjoy it along the way. There are certain games where it's natural and enjoyable to 100% the game, like Metroid Dread. I will admit that when I wrote the review, it felt like a chore, but in retrospect the chore is almost entirely alleviated because completion is gated behind learned skill, which is the correct way to gate items / 100%.

I've only started tracking "completed games" this year, so it's hard to do a direct comparison. However, I think at best I probably only finished 1 or 2 games last year (it's possible it was zero).
The trick here is that the games I've finished this year (and games I expect to finish) were all played mostly last year. I got near the end, wasn't in a position to 100% them, and the fun was all sucked out because I "had" to go back and try to get some obtuse side missions or something. Relieving myself of that "requirement" immediately opened up multiple finishes, because I was just hours (or sometimes minutes) away from finishing the game without 100% (or - in the case of Cyberpunk 2077 - a specific ending).

If I want to be cheeky, I've also completed 3 other games this year. I've been dabbling with Godot and have built some "finished" "games," which I think is a neat cheat for this 2023 goal.

Use my own game tracker exclusively

The screenshot shows a few columns under a page titled 'The road to 1.0: A game tracking web app that doesn't suck?' Of note under the In Progress column is a user's personal dashboard and game status changes on that dashboard. Of note under the Not Started column is Lists.
The current task tracker for the development of my own game tracker app.

Development here has stalled a bit since I've been mostly burned out on Web Development. Web development really sucks these days, and it's a pretty miserable field mostly dominated by finance bros and sleazy venture capitalists. I've been spending most of my free time doing game development with Godot instead, and I have to say that doing a type of development not personally linked to some piece of shit accomplishing their strategy for a market exit is nothing short of a life-giving breath of air.

That said, I do still want to start using my own app exclusively this year, and there isn't theoretically much left to do, so I expect to be able to still accomplish this, even if it's later than it otherwise could have been.

Basically accomplished!

Other than my game tracking app, I've pretty much accomplished my defined goals, which is nice because I don't feel a burden of guilt, but also not the best because I don't have an extrinsic reason to keep going on reading more or finishing games. I think it's fine, though, because "read more..." and "finish more..." are open-ended enough that I can keep going.

Here's to the rest of 2023!

]]>
https://log.rdl.ph/post/maybe-i-set-my-goals-too-low.html (Sun, 26 Mar 2023 19:36:00 GMT)
<![CDATA[It seems pretty clear that there are fundamentals of web development]]>

Laurie Voss recently wrote There's no such thing as the fundamentals of web development.
In that post, he makes the claim that there are two ways to disagree with him: on what fundamentals are, and on what web development is.
The beginning of the post is spent - ostensibly - showing how neither of those arguments applies. He also later frames any disagreement with his position as "gatekeeping" and "for the weak," which is convenient, since it's a built-in ad hominem attack!

Anyway, in the section with the heading that reads "What is a fundamental?", the post goes into great depth making a comparison to vehicles. In fact, the entire section is about vehicles. It is not at any time about web development, but let's go ahead with talking about vehicles.

That's not what your supporting evidence says

Let's begin with the easiest problem: the definition of "fundamental" that Laurie writes in his post is not supported by the source he says he drew it from.
Laurie wrote:

I apologize for reaching for a dictionary definition of fundamental, but it is "forming a necessary base or core; of central importance"

This is not what the Merriam-Webster dictionary says at the link provided. Merriam-Webster reads:
serving as a basis supporting existence or determining essential structure or function

That's the first definition. Down at number 3 is
of central importance

It's probably worth noting that definition 1B is
serving as an original or generating source

Definition 2A (2B is a description of religious fundamentalism) is:
of or relating to essential structure, function, or facts
and
also : of or dealing with general principles rather than practical application

I would apologize for reaching for a dictionary definition, but... you know... he started it. (Also I am not sorry, and apologizing would be disingenuous).
While we're here, here are some other dictionaries' definitions of "fundamental":
Dictionary.com:
serving as, or being an essential part of, a foundation or basis; basic; underlying

Cambridge Dictionary:
forming the base, from which everything else develops

It's pretty clear that Laurie created a definition of "fundamental" that more readily fit with his premise that there are no fundamentals of web development.

If we take all of the primary definitions of "fundamental":

  • serving as a basis supporting existence or determining essential structure or function
  • serving as, or being an essential part of, a foundation or basis; basic; underlying
  • forming the base, from which everything else develops

It's quite clear what would define a "fundamental": A basic thing that defines the underlying function. It would form a base, you know, from which everything else develops.
Clearly there are fundamentals. You could talk about physics (electricity) or processors (and their transistors) or networking or assembly code or higher-level programming languages or web browsers or the languages that run the web (HTML, CSS, JavaScript) or frameworks or no-code website builders. But no matter where you draw the line it is obvious that development does not spring forth fully formed without any fundamentals.
It does not spring forth fully formed in the same way that basic algebra does not spring forth fully formed. Sure, you can absolutely stumble into algebra by accident, but that doesn't mean that a number system with defined rules ceases to exist. The fundamentals are still there; whether you know the fundamentals exist or not does not change their existence.
Perhaps the second Merriam-Webster definition bears repeating here:

of or relating to essential structure, function, or facts

The facts, function, and essential structure of web development do not exist based on your understanding of them. They just are.

Carving a lot out of a car is not the same as no car

As noted, this section talks only about vehicles. It's a neat metaphor because many people understand cars and - sure - we'll go with it. The problem that I will gloss over for the sake of length here is that the car metaphor is never linked back to web development. The reader is left to connect all the dots for themselves, which is a clever way to tap into our motivated reasoning so that we can assign meaning and connection all by ourselves. We all think it means whatever we thought going in, so we nod along, building agreement momentum.
So if I'm glossing over all that, what is the real problem?
The problem is that this whole section is just about defining what a car fundamental is not. Let me briefly demonstrate what something is not:

  • It is not a dog.
  • It is not grass.
  • It is not a copper pipe.
  • It is not the emotion love.
  • It is not a mitochondria.

Have you gotten it yet? It was a paper airplane.
A section dedicated entirely to defining what is not a fundamental of a vehicle does not magically prove that there are none; it merely proves that the non-exhaustive list of things hasn't yet included one. Since we've already gotten pedantic, let's be clear: this is an ignoratio elenchi argument - an irrelevant conclusion. Sure, those are not the fundamentals for driving a car; what of it? It's a fun game to see how many more you can spot.
This entire section does nothing to prove that there are no fundamentals of web development, it merely defines a lot of what are not fundamentals of driving a car and expects the reader to conveniently connect some unrelated dots.

Please elaborate on this thought that says there are some fundamentals

This isn't even my strongest argument, but it's the funniest one, so what the hell.

In Laurie's section about vehicles - meant to promptly dismiss our idea that there might be web development fundamentals by short-circuiting our brains to agree with unrelated nonsense - he writes this line:

Fundamental to driving a car is understanding how to operate it: what the accelerator does, what the brakes do, and what effect turning the steering wheel will have.

So, in the metaphor where we are talking about cars as a stand-in for web development, Laurie wrote: "Fundamental to driving a car is [...]"
Let's dispense with the obfuscation: "Fundamental to web development is [...]"

Expand on that, there might be something there.

]]>
https://log.rdl.ph/post/it-seems-pretty-clear-that-there-are-fundamentals-of-web-development.html7AsbdU8e2YJc1EsY5Jcyb1Thu, 23 Mar 2023 17:51:00 GMT
<![CDATA[We are on the Cusp of a New Golden Age of Indie Games]]>

I've "tried" to learn game development at various points in the past, between the bones of a tower defense game in pure C in college (you know what, it was actually not awful, in my recollection) to some little exercises in PhaserJS.

The consistent feeling I had, though, was that the entire process was geared toward some end result(™). The process itself was cumbersome and frustrating, only buoyed by some future reward: "Eventually there will be a game here. One day."
There is undoubtedly value in practicing delayed gratification; not everything good happens immediately. But the differential between starting a game development project in the past and getting something "fun" (or at least impressive) was very large, and that amount of gap is where burnout happens.

The Indie Game Resurgence

It probably seems like indie games had their heyday in the early to mid 2010s. 2011 was a banner year, bringing us Minecraft, The Binding of Isaac, and Terraria. Fez was released in 2012. Shovel Knight came out in 2014, then Undertale and Ori and the Blind Forest the next year. 2016, while light on acclaimed indie releases, did bring us the heavy-hitter Stardew Valley. 2017 closed out the "golden age of indie games" with two wildly popular crowd-pleasers: Hollow Knight and Cuphead.
What have we gotten since?
Among Us? Hades?
Great games, but a trickle of good indie games is not indicative of a healthy ecosystem where smaller, more experimental (or innovative) games can thrive.

But, maybe it's not about survival. Maybe it's about how much effort it takes to build a game. In those days, developing a game could actually make sense: the appetite for new indies was ravenous, so the financial upside was huge. The success (and eventual purchase) of a game like Minecraft showed that a single dev with enough development chops could actually make a fortune selling a game they built in their free time. But the effort it took was also astronomical. Minecraft was Java; I'll leave that there. Game engines at the time were obtuse (as far as I can tell) and inflexible. Even a game as massive as Stardew Valley was ported to a new engine - in 2021 - because the old one was holding it back. An engine port is not something to be taken lightly, so we can assume that the older engines were cumbersome.
So, while the development was more difficult, the upside was worth it. In the late 2010s, though, the appetite was sated. Much has been written about the river of flaming garbage being pumped out by the likes of Steam Greenlight, but I think it's fair to say that consumers are more cautious than before, which means games need to deliver on their promises, have some hook that makes them truly fun, and be affordable (a definition of "affordable" is essentially impossible to summarize, but everyone has some definition given their circumstances).

I think that we are on the cusp of a new golden age of indie games.
Unity is just friendly enough that new developers can pick it up with a little bit of initial struggle. Even Unreal is getting more and more friendly to newcomers.
But the real thrust of new game development is relatively new engines like MonoGame, Construct, and Godot. Construct is a notable exception here, in that it has a subscription and a proprietary license. MonoGame and Godot, however, are open source and have extremely friendly licenses. Perhaps even more importantly, an engine like Godot is democratizing game development in an easy, consumable way.
Working your way through the Godot tutorial is just a matter of a few hours, zero dollars, and - maybe most importantly - it leaves you with foundational knowledge you can build on for your next step. You're not just copying and pasting code, you're understanding what it is that you're doing.

And this is why in the next 3-5 years, we're going to see a new glut of high-quality indie games. Not because of a massive shift in end-user appetite (although there is an argument to be made that nostalgia for days gone by is whetting a new appetite), but because modern tooling has brought the effort and investment in building a game back in line with what you can expect to get out of it.

I posted on Mastodon about my own experience with starting to learn Godot, and the basic takeaway is this: I have never felt like the learning process was the intrinsically fun part of game development. That the actual development part could be the fun part - thanks in large part to Godot - is a monumental shift.
I don't expect indie games to take over from the Calls of Duties and Defensive Structures in the Evening, but I do expect to see a steady stream of high-quality indie games over the next few years, and I - for one - am here for it.

]]>
https://log.rdl.ph/post/we-are-on-the-cusp-of-a-new-golden-age-of-indie-games.html1lsg6bdFNpD7faYawG0nb8Mon, 06 Mar 2023 21:31:00 GMT
<![CDATA[New Year, New Me? My Resolutions for 2023]]>

Better late than never

It's getting nearer to the end of January, so it seems like a good time to actually write down what I want to accomplish in 2023.

One of the key things that I typically incorporate into my "resolutions" - if you can call them that - is a reaction to things I didn't like from last year. So here's what I want to change about my life in 2023.

I resolve to:

Read more books

This one... uh... this one won't be that hard. Last year I read two books and I even had to cheat and say I finished the second one in 2022 (although let's be honest the rule is that the day isn't over until you go to sleep).

I've found that I feel guilty if I don't finish a book. I'm not sure where it comes from, but take it from me (I'm mostly talking to myself here): There are so many books. So many. Way more than you could ever read. And some books are bad! Actually lots of books are bad! It's possible that the majority of books are bad, actually. It's okay to not like a book and just... quit it.

I think it will also help if I give myself a license to read whatever I want. Last year, I told myself I would read one "fun" (maybe fiction) book and then a more academic book, and then back to fun, etc.
This sets me up to get stuck on a boring academic book, set it down, and never pick up another book for the rest of the year. After all, I didn't finish my veggies, so I can't have my dessert.
This is absolute bollocks and I'm giving myself - and you, reader! - permission to just read whatever. Read a good book, who gives a shit what it is as long as you like it?
Did you go to my bookshelf link up above? The first book I read this year was Jurassic Park. Banger book. Dinosaurs go hard. So many people got eaten. Read it in like two days. It was fun! It's okay to just have fun!

Finish more video games

Oh, I hear you: "but Thomas," you whinge, "playing more video games isn't a positive life change."
First of all, nobody said resolutions need to be positive changes.
More importantly, however, the evidence that playing video games has wide-reaching and substantial positive impacts on logic, cognition, sociability, motor skills, academic performance, and on and on has been piling up for decades. It's actually a perfectly fine pastime.
Most importantly, my resolution is to finish more video games. Not play (necessarily), finish.

I find that my general approach to video games is either casual ("I have a few minutes"), creative ("I want to build something"), or completionist ("all of the collectibles in this game seems achievable!").
None of these approaches lend themselves to finishing high-quality, narrative games. Not that this necessarily falls into the high-quality or narrative categories, but I've been stuck at something like 90% completed on Doom Eternal because I missed a single collectible on one of the last levels and the idea of going back and replaying the whole level just so that I can get that one item makes me never want to touch it again.
I know! I put myself into this position! I'm not happy about it!
So the goal is to finish more games, which will take some planning:

  • Intentionally limit myself to shorter games.

    I've found that huge, long games tend to get boring anyway. More is - quite often - not better. Tell me a story with a great set of mechanics and maybe a little innovation in 15-25 hours and I'll be perfectly happy.

  • Allow myself to not experience every possible piece of content in a game.

    I suspect that my completionist streak comes from a very long time ago - a much more formative time - when I was a gamer, but also very broke. Buying a game was a much riskier venture, so I had to get everything possible out of it.
    I don't have the time or energy for that level of neuroticism any more, and I have quite a lot more money, too.
    Basically, take it down a notch.

Use my own game tracker exclusively

This is a bit of a sleeper resolution. See, I've been using Backloggd for a bit to track video games, but it's got some bits that are too complicated, and some odd parts, and some things that I think really could be a lot better.
So I started building my own.
It's actually going fine, thanks.
It's going fine, but slowly. So this resolution is to attempt to guilt myself into increasing the pace just ever so slightly. Can I get a version 1 in 2023? Sure, I can. Can I do it with time to spare? Well, yes, I could. I just have to get motivated.
It's hard to build complicated web apps when there's absolutely no promise of a pay-off (financial) or even niche popularity. In all likelihood, this app will be exclusively for me. All by myself.
But!
I resolve to use it exclusively by the end of this year, which means that it will be "v1 feature complete." And version 1 is a pretty good app, if I do say so myself. So ultimately, the resolution here is "get to v1 on the game tracking app." But that's a bit clichéd, if you ask me: "I'm going to work on my side project!"
Sure, pal.
Maybe this way I can feel like it's an accomplishment!


So there we are. That's what I want to try to do better in 2023. If I don't do it, that's okay, too!

]]>
https://log.rdl.ph/post/new-year-new-me-my-resolutions-for-2023.htmlqfEYPbxSzxCYXwwTDvLBTSat, 21 Jan 2023 06:42:00 GMT
<![CDATA[Game Tracker devlog #1]]>

A few months ago, I started working on a new web application.

A commit graph showing many commits in August and September of 2022, but only a few in October.
My commits were pretty strong - almost daily - in August and September. All on this new app.

As with most apps I start building, this new one is to scratch my own itch. I've been using video game tracking (and movie tracking) sites for a long time, but I've never found a video game tracker that felt like the tool I wanted to use.

When I started, I had all of the early-project drive, and I got a lot done (I have 51 cards in my "Done" column!). But then, I came to the first problem: a feature that would allow the user to write their own data back to the application store.

No, I didn't get stuck on content moderation, I got stuck on the ability to submit a star rating.

ENCAPSULATION!

To be clear: I didn't actually get stuck on making a star rating. What I got stuck on is the idea - which I believe in strongly - that UI components should not make network requests.

Of course, network requests have to happen somewhere, so my approach for a long time has been to abstract those types of things (and every other kind of "not UI" action, too) out to another place.
This is super beneficial, because I can generalize any behavior, and I can exercise an event-driven architecture. However, it does have the one downside that I got stuck on: If I want to build an event-driven system and keep UI components clean, how far should I take that?

After all, if it's events all the way up, then I could catch every single event in exactly one place and branch to triggering actual behaviors from there, right?
But, the problem (of course) is that those UI components need to react to events occurring. Obviously, this problem has been solved in the most over-engineered way possible with Flux libraries like Redux and Vuex. You publish an action or you use a getter, and you go through a whole store update cycle, and then that data flows back down into the component. You've heard the story.

I have written - at length - about Flux in the past. As noted in that post, there are good parts to Flux. Namely, the directory full of actions that define how the application behaves.

Where I get stuck, though, is when I have to put this into practice. I don't actually want to abstract everything away from the components. I'm also absolutely loath to put "business logic" and network calls into a UI element.
Where I'm at - with an architecture like Fluxbus - feels like the best of both worlds: I'm boiling behaviors like data access, network calls, and more down to a single event action that can be observed by any actor in the entire application. The component only issues those actions and they are handled elsewhere ("business logic", essentially). I'm not depending on a global object or magical variables the way the VDOM libraries have done it - badly - for years.
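
To make that concrete, here's a minimal sketch of the shape I mean - the bus module, the message names, and the endpoint are all invented for illustration, not the app's real code:

// bus.js is assumed to expose the publish()/subscribe() pair described above
import { publish, subscribe } from "./bus.js";

// In the UI component: only announce intent. No fetch() in here.
export function onStarClick( gameId, rating ){
	publish( { name: "RATING_SUBMITTED", gameId, rating } );
}

// Somewhere else entirely ("business logic"): own the network call, then announce the result.
// The endpoint is a placeholder.
subscribe( {
	"RATING_SUBMITTED": async ( { gameId, rating } ) => {
		await fetch( `/api/games/${ gameId }/rating`, {
			"method": "PUT",
			"headers": { "Content-Type": "application/json" },
			"body": JSON.stringify( { rating } )
		} );

		publish( { name: "RATING_SAVED", gameId, rating } );
	}
} );
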
But, I still don't like it.

So that's how I got stuck submitting a numeric star rating on an app I'm working on.

Ultimately - like my refreshed approach to blogging ("just do it and worry about cleaning it up later") - I'm going to just forge ahead and deal with a refactor later.

My concern is not that I may have to refactor, it's that I can't think of the next improvement on top of this to remove all of the annoying bits. I feel nearly paralyzed by imposter syndrome.


Links I've been pondering

]]>
https://log.rdl.ph/post/game-tracker-devlog-1.html4Q9HXvKVlnWlDETD1qVKcvFri, 18 Nov 2022 09:35:00 GMT
<![CDATA[A Good Code Review]]>

If you've worked in software development for any amount of time, you've probably encountered The Code Review. This process is typical in almost every development organization.

What is code review?

The short explanation of what "code review" means is this: Someone other than the author looks at code changes and makes suggestions before the changes go into the product.

Why is code review?

Having experienced code review at many different companies and while interacting with open source projects with many different reviewers, I have whittled down my idea of what a code review should be to a few simple rules.

Every single comment on a code review should address a failure of one of the following guidelines:

  1. Is this code effective in accomplishing the stated goal of the changes?
  2. Is this code obviously correct in execution (e.g. apparently defect-free)?
  3. Is this code free of major security or performance issues?

Every code review comment should clearly address some problem with one of the three items above. If a comment does not, it should be explicitly self-identified as both non-blocking and an "opinion," "thought," or "concern." Sometimes an "opinion" can manifest as a "suggestion," but these are explicitly non-blocking, and should be self-identified as such.

If a review does not address any of these issues, but also fails to self-identify as non-blocking opinions, consider that what the review does - in fact - is arbitrarily gatekeep software changes.

The only meaningful review is one that catches implementation issues ("You said this does X, but in reality it does Y, which is similar, but not quite right."), or security flaws ("This would allow a malicious user to expose our database structure."). Reviews that contain anything else are meaningless FUD, used - most often - to enforce arbitrary adherence to actively harmful cult practices.

How is code review?

A convenient way to make sure your own code reviews are on target is to use a small table of "review metadata" to categorize them. You could include two items to start: "Type" and "Blocking".

Type could be one of six options (but your values could vary):

  • Operation: This doesn't seem to accomplish the stated goal.
  • Defect: There seems to be a logic problem.
  • Security: This causes a security problem.
  • Performance: This causes a noticeable performance problem.
  • Thought: This made me think of something.
  • Suggestion: Here's another way to do this.

The blocking value of your metadata would depend heavily on which type you use, but it's safe to say "Thought" or "Suggestion" types would be non-blocking, while the rest would almost certainly be blocking issues.

Here's an example review metadata table:

Review Comment Metadata
Type:     Performance
Blocking: Yes

That table might follow a comment like: "When I ran this in my browser, all of my fans spun up to their max speed. I checked the memory profile and this change consumes 800MB of additional memory when the page loads."

Suffice to say: If a review comment can't fall into any of these types, it probably doesn't have much value to the code author. Likewise: if none of a review's comments are blocking issues as defined above (Operation, Defect, Security, Performance), then the review should automatically be approved!

]]>
https://log.rdl.ph/post/a-good-code-review.html222nlSba5EwEywgIn2hM6JMon, 14 Nov 2022 14:45:00 GMT
<![CDATA[Mastodon for People Who Don't Like Mastodon]]>

Have you considered leaving Twitter, heard about Mastodon, checked it out, and didn't like it?

I'm sorry to hear that, but maybe this quick, no-BS explainer will help you get acclimated to something that isn't owned by a billionaire.

I don't like servers or instances or whatever. Where is mastodon.com?

It's mastodon.social. Yes, there are lots of other ones, but if you just want the "twitter.com of Mastodon," just sign up on mastodon.social.

Usernames are annoying

Okay.

Think of them like email addresses. Nobody could email you at @username unless there was a single, monolithic email service that controlled all email, so we use username@[your email service]. Maybe you use username@gmail.com, but you could also use username@hotmail.com and it still works because it's a standardized email protocol. Mastodon works the same way, so you would use @username@[your mastodon service]. In your case: @username@mastodon.social.

Nobody I like is there ☹️

Maybe, but also maybe not.

Anyway, Twitter was also a weird place in the early days with only the earliest adopters there, too. Maybe you can be a tech leader this time!?

But, like, where are all the interesting conversations?

Ah, well, they're happening, you just have to find the individuals having those conversations and follow them.

See, the biggest difference between Twitter and Mastodon is that there's no algorithm on Mastodon. There's no computer trying to juice engagement by showing you the hottest takes sure to elicit the most angry responses. There's no engagement boost when you favorite a post, or when you "ratio" a post. In fact, there's no incentive at all to posting or engaging except the real human part of that. Like something? Favorite it. Only you and the author will know. Isn't that a pleasant human interaction?

I want to retweet.

Sure.

They're called "boosts" on Mastodon, and you can do them as much as you want. The only difference is there's no Quote Tweeting. Quote Tweeting is a great way to dunk on somebody, but a terrible way to have a real conversation, so you can't do it. You can reply to them, though. It's much more civil!

I want to heart tweets.

Sure.

You can favorite (it's a star, like how Twitter used to be) posts. They're called "posts" on Mastodon (or "toots", but you can call them posts if "toots" sounds silly. Nobody cares. It was named by a person who didn't speak English as their first language, and they didn't know "toot" had a connotation other than the elephant sound in English.)

"toot" is cringe.

Saying things are cringe is cheugy.

Anyway, why is the sound an elephant makes (toot!) worse than the sound a bird makes (tweet!)? I like Proboscideans more.

I like apps. I want to use an app.

Sure. Define "app."

Haha, just kidding. I know what you mean.

One of the most popular Mastodon apps is Tusky (Android only). If you're on iOS, you can use the official app. For either platform, however, you might as well just install the PWA. Mastodon is a full-featured web application that you can install directly from the web page (use your phone's browser's "install" menu option). It looks and feels just like a native app, but it's always up to date and live just like a web site.

Why is following some people hard?

Basically, for the same reason that you have to type the full email address of a contact.

A good way to think of following people on Mastodon is like subscribing to a newsletter: Sure, you can go to someone's page and see the button to sign up for their newsletter, but you still have to type in your email address for their newsletter to reach you. You both have to be aware of each other for the connection to work.

There's a quick and easy work-around if you ever see that irritating window asking you to type in your username, though.

  1. When you're on a person's profile page, copy their username. Again, that's the bit that looks like @[the thing they picked]@[the server they're using]. Mine is @rockerest@mastodon.social.
  2. Go back to your own account.
  3. Paste the username of the person you copied earlier into the search field and submit the search.
    • Mobile: Click the "Explore" tab (it's the one that looks like a number sign or a hashtag).
    • Official App: Tap the search magnifying glass.
    • Desktop: The search input is at the top left.
  4. Press the "Follow" icon next to their account.

I can pay to edit my tweets and get fewer ads on Twitter.

Bummer!

All of that stuff is free on Mastodon! The latest version of Mastodon (which mastodon.social is using) has editing built in, and since there's no company trying to monetize your attention, there aren't any ads at all.

But if I pay on Twitter they give me a checkmark.

Well that seems like a great way to devalue identity verification - which should be a public service.

Anyway, Mastodon has something a little similar, but you self-certify by modifying a website that you control to point back to your Mastodon account. It's pretty easy to do by just adding rel="me" on a link to your Mastodon account.

I don't have a website.

Another bummer!

As you move out of centralized services like Twitter to "less centralized" services (like the federated services of Mastodon), it might be a good idea to consider who owns your data!

Before - on Twitter - Twitter owned all of your data. Leaving Twitter is hard because it's there - and absolutely nowhere else. In environments like Mastodon, you control your data, which means it's especially helpful to have your own website (it can be free!).

I don't want to get all starry-eyed in this "no-BS" guide, but if you're considering leaving a centralized service like Twitter, you might have that spark of individuality, too! A personal website is a great way to control who owns your voice and your data, and as a side effect, you can make the link on your Mastodon profile highlight green with a checkmark if you self-verify with a link back 😀.

What the heck; there are too many timelines!

I feel you.

If you're not following anyone, the best place to get started is your "Local Timeline." That's everybody else on your server. It's like all the tweets on Twitter.com.
Once you're following people, "Home" is your feed of posts from everybody you're following, in chronological order (remember, there's no profit-driven algorithm here, just an honest-to-goodness timeline).

If you really want to go crazy, the "Federated Timeline" is... well it's every post from every person on every server (that your server knows about).
This is like if hundreds of people were each running their own Twitter.com and each of those Twitter.coms each had thousands and thousands of users.

I don't find the Federated timeline all that useful because it's a massive firehose of posts, and - because Mastodon is used by people all over the world - it's almost never in a language I speak! (You can filter posts to only languages you know in your preferences, but most people don't seem to change their post language from the default "EN," so I still get many posts I can't read.)

Stick to "Home" and "Local Timeline," they'll serve you best.

Why are lots of posts on Mastodon hidden behind a "Show More" button?

Some parts of Mastodon culture (and there are lots of them, so don't assume this is universal) feel that using the "Content Warning" ("CW") button when creating a post creates a super safe space for everyone on the platform. They put a general "subject" up front, and then put the actual content inside the hidden part. If you're talking about very hard subjects, this is definitely kind to people who may not want to read about those things. It's also convenient when you're just talking about random topics like "politics" so people can choose whether they want to even engage with that.

However, using Content Warnings isn't a requirement. If you don't like it, don't feel obligated to use it. Some people might say that Mastodon culture always uses content warnings for any potentially sticky subject, but the bottom line is that there is no single "Mastodon culture," so do what feels right to you (but follow the rules of the server you're on.)


Do you still have concerns or questions about Mastodon?

You can contact me directly at @rockerest@mastodon.social and I'll try to add it to this document.

Happy tooting!

]]>
https://log.rdl.ph/post/mastodon-for-people-who-don-t-like-mastodon.htmlTV1e858aVUXmyGRBuFMCTSun, 13 Nov 2022 16:40:00 GMT
<![CDATA[Running Puppeteer on WSL2]]>

I have recently had reason to subset fonts manually and I found a tool called glyphhanger which subsets arbitrary fonts to arbitrary codepoints.

As with many of the cool, useful tools on the internet these days, it's something that Zach Leatherman built. Thanks Zach!

The thing - though - is that glyphhanger requires Puppeteer.

Installing Puppeteer in a non-graphical environment

I use Windows for all my day-to-day programming outside work, but I use Linux - via WSL2 - to actually make it a usable environment.

Puppeteer doesn't work out of the box in a "headless" Linux environment.

Thankfully, I found someone who had the same problem. They went through a lot more steps, so I thought I'd boil it down to just 2:

  1. Install chromium with sudo apt install -y chromium-browser
  2. Install one missing library dependency with sudo apt install -y libatk-adaptor
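
After those two installs, a quick smoke test will tell you whether headless Chromium can actually launch. This isn't part of the original fix - just a minimal sketch using Puppeteer's standard API:

// smoke-test.js - if this prints a page title, Puppeteer can launch under WSL2
const puppeteer = require( "puppeteer" );

( async () => {
	const browser = await puppeteer.launch();
	const page = await browser.newPage();

	await page.goto( "https://example.com" );
	console.log( await page.title() );

	await browser.close();
} )();

Run it with node smoke-test.js; if it still fails, the error usually names whichever shared library is missing.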

Enjoy GUI-less Chrome(-ium)

]]>
https://log.rdl.ph/post/running-puppeteer-on-wsl2.htmlUgQ4Z0eiEg23TFZ8NrFPbWed, 01 Jun 2022 02:15:00 GMT
<![CDATA[Here's How 2021 Is Going So Far]]>
[Compiled from a Twitter thread]

The amount of Stockholm Syndrome in web development and overall engineering is mind-boggling and overwhelming.

I don't even know where to start with people who love how hard frontend development is today.

Frontend is undeniably complex (and extremely so). However, it does not have to be hard, or compiled, or heavy (in runtime and bytes). These are invented problems.

"Imagine if literally everything wasn't terrible." doesn't work when the recipient loves how it is right now.

I'm so exhausted. Frontend development has devolved into a race to the most complicated solutions to already-solved problems, propped up by thought leadership that excommunicates anyone who dares speak out.

You fall in line or you are labelled "obsolete."

I think if this is how frontend web development is going now and into the future, I'd rather be obsolete. I don't want to be part of this travesty when we are ultimately mocked for it.

]]>
https://log.rdl.ph/post/here-s-how-2021-is-going-so-far.html4Cvr17gVlALXmWSF960CF4Thu, 11 Mar 2021 19:30:00 GMT
<![CDATA[Capitalism Has Almost Finished Destroying Software]]>

Just a little farther to go before we get a thinkpiece about how the world is "post-software" and we need something else [that we can suck boatloads of money out of before we discard its withered husk].

Myopia Is Caused By Capitalism

A bunch of companies made a lot of money in the last two decades, and they did it - primarily - by being first to market. The Silicon Valley, startup, "eat-the-world" culture created by that mindset has filtered out into every company that writes software in any competitive market. Naturally, this exclusively rewards myopia. There is no future value, there's only what you can ship out to customers tomorrow.

Of course the irony here is that the billionaire VC leeches pushing this mindset are making money on the future - but not so far into the future that everything crashes down under its own weight. Just far enough that they can sell their stake and get rich.

Software Myopia Causes All The Rest of Our Problems

Education and production will follow the financial incentive. If every company is looking for the most rapid returns, they will choose the tooling that helps them get there. That's Virtual DOM tools like React - for now at least. The tooling is obviously and objectively inferior to a lot of other options, but its one redeeming characteristic is that you can get things done insanely fast. Maintainability be damned.

If every company is looking for React developers, that's what bootcamps will train, because that's what students want. And of course, the high hire-rate after graduation is a great statistic, too.

If the entire field around you looks like it's all framework development, there's no financial incentive to actually learn your programming language. After all, no one is hiring JavaScript developers, they're hiring React developers (and Vue developers, and...).


The technical debt accrued along this set of predefined rails (another tool in this exact vein, but it's backend) is absolutely massive. Any product of any size eventually collapses under its own weight. I've seen it happen - always in slow motion - at two separate companies. No surprise both are VC-funded.

This hell-cycle and the - obviously apparent - outcome are why people hate JavaScript and constantly build tools to replace it. It's also why companies hire more and more developers to throw at their software (who - of course - do the exact same thing, compounding the problem).

It's a Human Problem

The solution is undeniably simple: Value long-term quality over short-term profit.

I appreciate how utterly insane I sound, which is - of course - the problem. We can't change the financial incentives without changing the structure of our country's financial system.

Now, we could also fix this from the bottom up, too. All it would take would be every developer refusing to reach for the easy, fast solution and demanding to do things the right way.

By the nature of the situation in which we find ourselves, this is equally insane. Developers have constructed stories they tell about themselves: as capable, valuable individuals making a difference... "changing the world." Admitting that the entire basis for that identity was a lie is personally devastating.

"I have sacrificed both my craft and the product of my effort at the altar of faster deployments so that somebody above me can make a buck a little faster. I am essentially a machine that a capitalist can extract dollars from and then discard." is a real tough pill to swallow.

]]>
https://log.rdl.ph/post/capitalism-has-almost-finished-destroying-software.html76zmcVWzjSxF0mB0VwIwBhFri, 14 Aug 2020 14:27:00 GMT
<![CDATA[Fluxbus 2: 2 Flux 2 Furious]]>

This again.

I've talked about Fluxbus in the past. If you haven't read about Fluxbus, I (not very) humbly suggest it's the application architecture you need but have never heard of.

Astute readers might notice that I bemoan one of the worst things about Flux:

Every single listener is called for every single change

[…]

Since the subscription pattern in Redux is store.subscribe( someCallback ), there is no way to filter how many functions are run when the state updates.
Imagine a moderately complex application that has a state value like { messages: { loading: true } } and a few hundred subscribers.
How many of those subscribers care that the messages stop loading at some point? Probably 1. Wherever your chat module is, it's probably the only place that cares when the messages stop loading and it can hide a spinner.
But in this subscription paradigm, all of your few hundred subscribers are called, and the logic is checked to determine if they need to update. The inefficiency is mind-boggling.

Again, Flux doesn't quite have the exact same problem: not only because of the fact that you can have multiple stores for specific intents, but also because it doesn't prescribe any particular subscription pattern.
You could implement a pre-call filter that allows subscribers to only be called for particular actions that they care about, but this is so far off the beaten path it's never been seen by human eyes before.

And then I go on to suggest the alternative - Fluxbus - which has an implementation like this:


function publish( message ){
	return bus.next( message );
}

function subscribe( map ){
	// Returns an unsubscriber
	return bus.subscribe( ( message ) => {
		if( map[ message.name ] ){
			map[ message.name ]( message );
		}
	} );
}

This is just wrapping the native Rx methods with a simple helper (in the case of publish) or our filter magic (in the case of subscribe). The subscriber takes a map of functions keyed on a message name. If a message with that name is seen, the matching handler is triggered.

Ladies and gentlefolk, this is what's called a Minimum Viable Product.

Explain yourself.

The problem here is that in a worst case scenario, this is exactly as bad as the Flux pattern where it calls every subscriber for every change.
Granted, that's the worst case behavior, compared to Flux's intended average behavior, but it's still not great.

Let me explain what I mean by worst case.
Let's say you have 1000 bus messages that you'd like to subscribe to. In a normal application, you might subscribe to 5 in one component, and 10 in another, and so on. This might result in close to 200 different subscribers.
As a refresher, you might subscribe like this:

subscribe( {
	"SOMETHING_I_CARE_ABOUT": ( message ) => { ... },
	"SOMETHING_ELSE": ( message ) => { ... },
	"ANOTHER_THING": ( message ) => { ... },
	"ONE_MORE_THING": ( message ) => { ... },
} );

In practice, you've added one subscriber, but it will hand off control to any of those four handlers when the bus receives any of those four events.
Pretty decent, but the outer call is still dumb. There's no filtering at the top level, so every time you call subscribe, that's another outer subscriber that will get called on every message through the bus.

Again, I want to reiterate: this is significantly better than binding into every component and calling every handler on any message (or any data change, in Flux terms). Once a function like this is called, the Message Bus will determine that none of the message names match, and it won't call the handlers.

At this point, I'm sure you can imagine the worst case scenario. I'll rewrite the above:

subscribe( {
	"SOMETHING_I_CARE_ABOUT": ( message ) => { ... }
} );

subscribe( {
	"SOMETHING_ELSE": ( message ) => { ... }
} );

subscribe( {
	"ANOTHER_THING": ( message ) => { ... }
} );

subscribe( {
	"ONE_MORE_THING": ( message ) => { ... }
} );

Now we're close to Flux territory. While we're not running expressions inside component bindings to check whether the update is relevant to us or not, we're still calling every outer subscriber for every single message sent through the bus.
Reiterating one last time: even the worst case scenario here avoids unnecessary component cycles by never calling into the handlers if the message doesn't match.
Fluxbus with both hands tied behind its back still beats Flux on a normal day. But it's not ideal.

Fix it.

The key here is that Fluxbus was just the absolute thinnest of wrappers around an RxJS Subject. It's barely even there. Here's the code again:

function publish( message ){
	return bus.next( message );
}

function subscribe( map ){
	// Returns an unsubscriber
	return bus.subscribe( ( message ) => {
		if( map[ message.name ] ){
			map[ message.name ]( message );
		}
	} );
}

This is the absolute minimum extra code possible to implement subscriptions-by-message-name.

We can do a lot better by being just a little more clever.

The basic premise is that a Fluxbus instance can track handlers separately from handling inbound messages. So when a consumer calls subscribe, the internals of Fluxbus turn the message-name-to-handler mapping into a message-name-to-identifier mapping. Then it pushes the handler into a dictionary of all handlers and it's done.
Of course, that's not quite the whole picture - you need to reasonably handle unsubscriptions, too, and there are other little optimizations you can do along the way.
For the sake of our brains here, I'll present the MVP for this - sans abstractions and optimizations.

It gives us the code or it gets the hose again.

import { Subject } from "rxjs";
import { v4 as uuidv4 } from "uuid"; // assuming the standard uuid package as the source of uuidv4()

var bus = new Subject();
var handlers = {};
var listeners = {};

bus.subscribe( ( message ) => {
	( listeners[ message.name ] || [] )
		.map( ( id ) => handlers[ id ] )
		.forEach( ( handler ) => {
			handler( message );
		} );
} );

export function publish( message ){
	bus.next( message );
}

export function subscribe( messageMap ){
	var names = Object.keys( messageMap );
	var callbacks = Object.values( messageMap );
	var idMap = [];

	callbacks.forEach( ( cb, i ) => {
		let name = names[ i ];
		let id = uuidv4();

		handlers[ id ] = cb;

		if( listeners[ name ] ){
			listeners[ name ].push( id );
		}
		else{
			listeners[ name ] = [ id ];
		}

		idMap.push( { name, id } );
	} );

	return () => {
		idMap.forEach( ( { name, id } ) => {
			delete handlers[ id ];

			if( listeners[ name ] ){
				listeners[ name ] = listeners[ name ].filter( ( listener ) => listener != id );
			}

			if( listeners[ name ].length == 0 ){
				delete listeners[ name ];
			}
		} );
	};
}

My eyes.

You said you'd give me the hose if I didn't give you the code.

What's happening here?

Let's break it down.

var bus = new Subject();
var handlers = {};
var listeners = {};

bus.subscribe( ( message ) => {
	( listeners[ message.name ] || [] )
		.map( ( id ) => handlers[ id ] )
		.forEach( ( handler ) => {
			handler( message );
		} );
} );

In the first section, we set up an RxJS Subject as our bus, just like before. We also define two variables that are going to cause all of this to be "singletonish" which is a very technical term for "singleton-y" things. It's a closure, basically.
Then, we immediately subscribe to the bus. This is the only bus subscriber. It will determine which handlers to trigger when a message comes in.
Roughly, this subscriber goes something like this:

  1. Give me all the IDs of listeners for this message name
  2. Convert all those IDs into their real handler functions
  3. Call each handler function with the message
export function publish( message ){
	bus.next( message );
}

Then we have our normal publish function.

boring 🥱

export function subscribe( messageMap ){
	var names = Object.keys( messageMap );
	var callbacks = Object.values( messageMap );
	var idMap = [];

	...
}

Then, we get to the real magic: subscribe.
subscribe still accepts a dictionary that maps a message name to a handler function to be called. But now, we immediately split that map into the name keys we're listening for and the handler values to be called. It's going to be very convenient later to have these as discrete arrays. We need a way to keep track of the mappings we're going to create here, so we also initialize an empty array to store mappings.

export function subscribe( messageMap ){
	...

	callbacks.forEach( ( cb, i ) => {
		let name = names[ i ];
		let id = uuidv4();

		handlers[ id ] = cb;

		if( listeners[ name ] ){
			listeners[ name ].push( id );
		}
		else{
			listeners[ name ] = [ id ];
		}

		idMap.push( { name, id } );
	} );

	...
}

The next chunk begins by iterating over each of our handler functions and grabbing the message name associated with it and generating a new unique ID for that handler.

Say there fella, couldn't this be
Object
	.entries( messageMap )
	.forEach( ( [ name, cb ] ) => { ... } );
to avoid two extra variables and two iterations over the object to extract them?
……………………………… stop yelling at me i'm the internet's sweetie pie

Once we have a unique ID we immediately store our handler in a dictionary associated to it.

Then, we check our global listeners object for the message name. If we're already listening to that message name, we push our handler ID into the list. If we're not, we just create a new list.

To keep track of all our mappings in this subscriber, we push an object into the list of ids that includes both the message name and the handler ID.

export function subscribe( messageMap ){
	...

	return () => {
		idMap.forEach( ( { name, id } ) => {
			delete handlers[ id ];

			if( listeners[ name ] ){
				listeners[ name ] = listeners[ name ].filter( ( listener ) => listener != id );
			}

			if( listeners[ name ].length == 0 ){
				delete listeners[ name ];
			}
		} );
	};
}

Finally, we return an unsubscribe function. When a consumer calls subscribe with a map of message names and their handlers, they will expect to receive an unsubscriber that unsubscribes all of their handlers at once just like they passed in a single dictionary.

So we take our handy array of { "name": message.name, "id": handlerId } mappings and we loop over the whole thing.

We delete the handler associated with the ID we're iterating over. Then, we remove the listener ID from the list of all the listeners for that message name.
As a last cleanup check, if that message name no longer has any associated listeners, we just delete it from the dictionary of message names that we check in the bus subscriber.
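
Before the math, a quick usage sketch - the message name and payload are made up - to show the shape of the new API. Note that subscribe now hands back a plain function, so tearing down is just calling it:

// "GAME_RATED" and its payload are invented for the example
var unsubscribe = subscribe( {
	"GAME_RATED": ( message ) => {
		console.log( "rated", message.gameId, message.rating );
	}
} );

publish( { name: "GAME_RATED", gameId: "some-game", rating: 5 } );
// logs: rated some-game 5

// later, when the subscriber goes away
unsubscribe();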

Do math.

If we let n be the total calls to the subscribe and x be the total subscriptions for a given message (like "SOME_EVENT": ( message ) => { ... }), then...

The previous version of Fluxbus was O(n + x) for every published message.
That is: if your application calls subscribe 100 times, and in just one, single instance of those calls you subscribe to SOME_EVENT, there will be 101 function calls when a single SOME_EVENT is published into the bus.

This new version of Fluxbus is O(x + 1) for every published message.
That is: if your application calls subscribe 100 times, and in just one, single instance of those calls you subscribe to SOME_EVENT, there will be 2 function calls when a single SOME_EVENT is published into the bus.

Done?

Of course not.

The immediately obvious next step is improving this code. You can see such an improvement in the library I built to reduce my own application boilerplate. I've very cleverly named the implementation MessageBus.

Of course then there's syntactical stuff like what you were yelling at me about up above.
Code is never finished.

What do you think about Fluxbus 2?

]]>
https://log.rdl.ph/post/fluxbus-2-2-flux-2-furious.html4buTHtlgjgJbepJkBhDZCyFri, 31 Jul 2020 21:30:00 GMT
<![CDATA[Remote Work Q&A | Jamstack Denver]]>

Thanks!

Thank you to all the folks who showed up to talk about remote work and ask us questions!

Wrap-Up

At previous remote work events, I didn't have a single place for after-the-fact answers, somewhere to collect all the resources I might have mentioned, and more.
I want to make sure there is a single place to collect everything, so I'm collecting everything here.

Video

Before the Q&A session, Mike Duke gave a great talk about his experience shipping his first production site, so for more insight into being a dev, I highly recommend watching his talk, too.
The majority of the event was recorded, so you can find that video below.

Resources

I find the resources below helpful for folks interested in learning more about remote work.

Other Questions

Do you have questions for me?
Send me an email!

]]>
https://log.rdl.ph/post/remote-work-q-a-jamstack-denver.html5opDfjiYKdELD94Oljlxi3Sat, 02 May 2020 20:09:00 GMT
<![CDATA[Remote Work AMA | DVLP DNVR]]>

Thanks!

I'm putting the thanks section first, because without the great organizers and questions from the attendees, this AMA event could have been a real bust!

So: thank you to the DVLP DNVR team and all the folks who showed up to talk about remote work and ask me questions!

I think I had some pretty good answers to some of the questions, and some maybe okay(ish) answers to others!

Wrap-Up

During and after the event, I realized I had made a mistake by not having resources about remote work readily available somewhere for everyone to see!

After the event, I got some more questions via Slack (Denver Devs) and my email (rdl.ph).
I want to make sure there's a single place to collect everything, so I'm collecting everything here.

Video

The majority of the event was recorded, so you can find that video below.
The recording begins in the middle of a question asking what Jamstack is and if it's just for sites without servers (it was a nuanced question, so that's not exactly a good paraphrase!).

Resources

I mentioned a few links and other resources, so I've collected those below.

Other Questions

Some folks have asked me more questions after the fact, so I thought I'd collect those questions and my answers below.

  1. Advice for someone looking for a remote position as their first job?
  2. What are you looking for when you're evaluating remote candidates?
  3. How do I grow as a developer?
  4. What do you think about Svelte?

From Tobie Tsuzuki:

What advice do you have looking for remote jobs as your first technical role?

That's a tough one!
To be clear, my first job wasn't remote, so I can't give you any insight from that side of the picture.
However, I've hired people (remote & otherwise) and I've gotten jobs remote after my first, so here's what I think:

  1. First of all, be excited about the company.

    I admit that's something I can say from a place of privilege: if you don't have a job, anything is better than nothing - excitement aside!

    But it really helps you stand out when you have lots of experience with the company, like their product, use their product, have ideas of how to improve their product, etc.
    "Passion" is an overused and under-defined search criteria, but if I were to use "passion" as a hiring criteria, this is what I want.
    Do you want to work for this company on our product, or are you excited about something else?
    Being excited about the company will make you more successful no matter what the tech is, whether or not it's remote, etc.

  2. Practice/exercise and demonstrate the skills required for remote work.

    The skills for remote aren't really all that different than the skills for non-remote work, but often during hiring the onus is on you - the candidate - to show that you can exercise those skills.

    Sadly this isn't as often the case when hiring for "local" employees, but it should be.

    Those skills are (not exclusively!):

    • clear written communication,
    • excellent time management,
    • excellent self-motivation,
    • low ceiling for getting stuck (e.g. if you get stuck you immediately seek help),
    • the ability to self-direct (e.g. if you encounter a problem or lack of things to do you know how to find solutions or create your own work)

    Basically, if you can think of a skill you should have as a professional engineer, being able to demonstrate those skills (ideally with evidence!) is crucial to laying an initial foundation of trust for remote work.

  3. If you can, frame as much as possible as how remote work benefits the company!

    It's fine to say "remote work is better for me because my sleep schedule doesn't fit the average work time period."

    However, some companies might interpret that as: "I'm going to slack off all day and party all night."

    A better phrasing might be something like: "I think remote work is a great way to run a business because it means I can work in all the time I'm most productive throughout the day!"
    Now it's phrased such that the business sees how they're benefitting from you working remote!


And a followup:

When you are looking for 'passion' do you expect the candidate to have made projects or can demonstrate a working understanding of the product, or is a well crafted cover letter enough to get your foot in the door.

Yes!

A well-crafted cover letter is enough to get your foot in the door, but only that.

I would say that for any job interview, remote or not, you should know as much as you possibly can about the product.
I've used GitLab for many years, but before I interviewed, I read all of their product marketing, dove into the handbook and read up on policies and engineering structure, read guidance docs for the way they run reviews and code styleguides, etc.

I think if you want a job that necessarily means you should have at least a working understanding of their product, but probably more than that.

As for projects, that's kind of a tough one.

For someone coming to me for their first technical role, probably looking for a junior level position, I'd be less interested in their projects.
I know those projects probably wouldn't apply directly to my company in all but a small handful of cases, so I wouldn't be judging too harshly on those projects.
I would look for the following, however:

  • Has the candidate created projects outside of their school/bootcamp/training course?

    We all had to do schoolwork; did the candidate want to do more?

  • Has the candidate demonstrated proficiency in what they're applying for?

    Let's say you're applying for a front end position.
    If all of your projects are Python and Rails, I'm going to raise an eyebrow and wonder if you're going to succeed.
    I'll still give you a shot!
    But, if you can't show me in the interview that you know JavaScript pretty well, I'll probably take away two things:

    1. you're not a good fit and
    2. you don't know the difference between front end and back end!
  • Does the candidate have any specific things they especially like?

    As a junior / first time tech role, it's totally fine to not know specifics about where you want your career to go.
    But, if you can say: "I love the nuts and bolts of reactive data flow, and I've recently gotten really interested in Observable streams" that shows me that you've explored the depths of your field (front end in this case, it's just an example).

    Critical thought and "digging in" to tech is important to advancing, so showing those things early means I'm more likely to take a risk on you.


Following up again:

I really like the idea of digging into tech. What's your approach for furthering your understanding of certain techs?
So far since I am more junior and I don't know where I am headed, I have just been trying to learn any new thing that seems interesting, but after trying to learn multiple frontend frameworks it seems I have no deep rooted knowledge of any of them.

☝🏻 This is absolutely the Achilles' heel of front-end frameworks.
They abstract so much away and often invent their own syntax (thus requiring compilers and transpilers) that new learners rarely get a chance to see what's actually happening under the hood!

I would say the best way to start digging in is to identify something that you think is important or cool, and then start searching for ways to write your own version of it!

I know it sounds crazy, but it's the best way to really get into the fundamentals.

So here's an example.
Let's say you use React all the time.
You have learned or self-identified that the way React renders new things to the screen is really cool.
It's just so convenient for you as a developer to spit out the same HTML from your render function and then React magically updates the DOM for you... so good!

Okay! But how does it work!?
Build one yourself!
You don't need all of React: just build a static HTML page with some JavaScript included, and every time you call a function with a data object, the JavaScript should update the real page with the correct new HTML.

Should be moderately simple, right?
So it sounds like you're going to need to learn what React is doing (Virtual DOM <-> Real DOM diffing), which means you'll need to learn about virtual DOMs, which means you'll need to learn about document fragments in memory.
Then you'll need to learn how React chooses what to update (what counts as having changed?), and in fact how it maintains hooks into the DOM so it can update appropriately.

Start piecing these things together.
Your first pass can just be the worst thing ever, it's fine.
As you learn more and more about this really specific topic, it'll improve.
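
To make that concrete, here's a deliberately tiny first pass at the idea. None of this is React's actual code; the view and render functions and the #app container are just names I made up for the sketch, and the "diffing" is the laziest possible version: compare the old markup to the new markup and swap the whole thing if anything changed.

var container = document.querySelector( "#app" );

// Build the new tree in an in-memory fragment (a baby "virtual DOM").
function view( data ){
	var template = document.createElement( "template" );
	template.innerHTML = `<ul>${ data.items.map( ( item ) => `<li>${ item }</li>` ).join( "" ) }</ul>`;
	return template.content;
}

// "Diff": if the markup changed at all, throw out the old DOM and insert the new one.
// A real library walks both trees and patches only the nodes that actually changed.
function render( data ){
	var next = view( data );
	if( container.innerHTML !== next.firstElementChild.outerHTML ){
		container.replaceChildren( next );
	}
}

render( { items: [ "one", "two" ] } );
render( { items: [ "one", "two" ] } ); // same markup, so the DOM is left alone
render( { items: [ "one", "two", "three" ] } ); // changed, so the page updates

It's janky on purpose: once you notice that replacing everything nukes focus, scroll position, and input state, you've discovered exactly why the real libraries bother with fine-grained diffing.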

Most importantly, as you learn more, you'll identify either that this still intrigues you, or that it doesn't.
If it doesn't, you can start over, pick something new!

Maybe.... How do CSS transitions and transforms work?
Is CSS an animation library? Surely not? ..... hmm something to research.

Finally, I find the MDN Web API page incredibly illuminating. It shows me all the things I don't know.
I would recommend you take a look at the Interfaces section and see if anything jumps out at you.
I think a bare-minimum goal would be to understand all of the interfaces whose names start with HTML and CSS.

I look at that list and I really want to dig into, for example, Speech* and Sensor* stuff.
Any of the RTC* (Real Time Communication) stuff also looks really cool.
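
If you want a taste of how approachable some of those interfaces are, the Speech stuff even works straight from the developer console, no build step required (assuming your browser ships the Web Speech API):

var utterance = new SpeechSynthesisUtterance( "Hello from the Web Speech API!" );
// speechSynthesis is a global on window in supporting browsers
speechSynthesis.speak( utterance );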


Then Tobie asked a question about another meetup I run - Jamstack Denver:

I saw the JAMStack meetup and thought Svelte looks pretty smooth, but I'm not sure if it will gain any traction. What were your impressions of it?

Well... HMM 😅

I think it's a cool idea.
PROS:

  • I really like that it outputs raw JS (and a small amount of it) that runs in the browser.
    It doesn't ship Svelte to the browser; it just builds bare little functions that do the right thing.
    That's cool.
  • I like that - for the most part - it's JUST JS, HTML, and CSS.

CONS:

  • I've really, really burned out after 7+ years of compilers and transpilers and package management and plugins and configs.
    I don't like that Svelte literally cannot run in a web browser unless you compile it (or JIT compile it, like all the other frameworks).
  • Part of the reason for that is that I really dislike that Svelte has hijacked a valid piece of JS (the label) to indicate data binding (e.g. $: [some expression] - see the little snippet after this list).
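
For anyone who hasn't seen it, here's roughly what that hijacking looks like. The first half is plain JavaScript you can run anywhere; the second half only means something once the Svelte compiler gets hold of it, so it's shown as comments (the count/doubled names are just made up for the example):

// In plain JS, "$:" is just a labeled statement - valid syntax, no special behavior.
$: console.log( "this runs once, like any other statement" );

// Inside a .svelte component, the compiler reinterprets the same syntax as
// "re-run this whenever the values it reads change":
//
//   let count = 0;
//   $: doubled = count * 2; // recomputed every time `count` changes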

Personal stuff:

  • I used Ractive.js in the past and I loved it at the time!

    The guy who made Svelte (Rich Harris) also made Ractive.js, and Svelte feels like an incremental step forward - he essentially moved Ractive.js into a JS compiler instead of a frontend framework.

  • I stopped using Ractive.js because every frontend framework has been made obsolete by hyperHTML-style DOM libraries (like lit-html) and the platform-native component model baked into web browsers.
  • REDACTED 😅
    Tobie asked me what I thought, so I gave them my personal opinions!

I don't really think it's going to catch on.
Sadly, I think VDOM-based tools established themselves early, and it's hard to pry people away from that.

That said, VDOM is embarrassingly inefficient, so I hope that tools like lit-html and Svelte (not itself hyperHTML-like, but it's better than VDOM) eventually win out.

]]>
https://log.rdl.ph/post/remote-work-ama-dvlp-dnvr.html4nlVmqmRAKSpDHUETORmQ9Fri, 03 Apr 2020 23:53:00 GMT
<![CDATA[The Secret Feed (Warning: Boring)]]>
You're viewing a secret post! Read more about RSS Club.

You've Been Inducted

Since I now have a secret feed just for RSS subscribers (welcome!), I figured I should put something in it.
If you're not sure what I'm talking about - well, first of all, how did you get here!? More importantly, check out RSS Club.

RSS? 🤢🤮

To be clear, I don't feel that way about RSS, but I imagine some people do.
So why RSS?

I think social media is an unmitigated negative drain on society. Yes, there are bright spots, and for certain underrepresented groups of people, it can be a positive. But, in terms of the larger societal impact that social media has had, the negatives are... well... unmitigated.

That's not even to speak of the centralization and control that companies exert over content. Even ostensibly free platforms like Medium began paywalling content that unpaid individuals had written for free, just wanting a place to share their thoughts and ideas.

Ultimately my point is that the less control we have over our content, the more likely it is to be taken away from us.

I know that running an RSS feed will not be for everyone. Maybe it's even too late to bring RSS back from the (almost) dead. My Inoreader subscriptions say otherwise, but perhaps it's a faux glut of content and it's going the way of the dodo. In any case, this web - one where the content is available on my terms, in a web browser for free, and in an RSS feed, delivered to readers for free, without restrictions - is the web I want to use.

If you want to join those of us who want a freer, simpler web, here's how to participate (although I should note that - because this web is freer and simpler - I literally cannot make you do anything you don't want to do):

  1. Don’t Talk About RSS Club
  2. Don’t Share on Social Media
  3. Provide Value

Don’t talk about it. Let people find it. Make it worthwhile.
These rules are completely unenforceable, and that's the way I like it.

How's It Done!?

Like Dave, I'll provide the few details I have about how I made this work.

I use Contentful to store my content and a custom front end to generate the posts, so it was initially as easy as adding a field to my post types.

[Screenshot: the new RSS Exclusive field in Contentful, showing a Y/N indicator and the 'boolean' type]
Pretty easy to set up. My only concern would be maxing out the number of fields for a content type, but the free account on Contentful has fairly generous limits.

Once the field is set up, it's just a matter of telling Contentful that a given piece of content is RSS Exclusive.

[Screenshot: the RSS Exclusive field in use on a piece of content, showing 'yes'/'no' radio buttons and a 'clear' link, with 'yes' selected - this post is marked as RSS Exclusive]

At that point, the content is fetched from the Contentful API with the field included, so I just have to remove it from the main index.

// Index generator; the get*ForPromotion() helpers live elsewhere in the codebase.
import { sortReverseChronologically, isRssExclusive } from "../../common/Post.js";

function getPosts( promotion = "production" ){
	// Posts flagged as RSS Exclusive never make it into the main HTML index.
	return sortReverseChronologically( [
		...getLongForPromotion( promotion ),
		...getShortForPromotion( promotion ),
		...getImageShortForPromotion( promotion )
	] ).filter( ( post ) => !isRssExclusive( post ) );
}

// common/Post.js
export function isRssExclusive( post ){
	// If the field was never set, fall back to false.
	return post.fields.rssExclusive || false;
}

Then, I just add a little header to the posts so the body itself has some descriptive content that I don't have to copy/paste into every post. I'll just show one here since all the post templates use the same code.

${post.fields.rssExclusive ? html`
		<code>
			<x-icon icon="rss"></x-icon> You're viewing a secret post! <a href="/rss-club">Read more about RSS Club</a>.
		</code>
		<br />
		` : ""}
	${unsafeHTML( post.fields.content )}

The same header is placed in front of the RSS item so it shows up right at the beginning of the RSS content, too.
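
Roughly, that looks something like the sketch below. To be clear, buildFeedItem and the title/slug fields are illustrative names, not the actual feed code; only rssExclusive (and post.fields.content) are the real fields from above.

// Illustrative sketch only: prepend the RSS Club notice to the item body
// before it goes into the feed.
function buildFeedItem( post ){
	const notice = post.fields.rssExclusive
		? `<p>You're viewing a secret post! <a href="https://log.rdl.ph/rss-club">Read more about RSS Club</a>.</p>`
		: "";
	return {
		title: post.fields.title, // assumed field name
		link: `https://log.rdl.ph/post/${ post.fields.slug }.html`, // assumed field name
		description: notice + post.fields.content
	};
}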

Of course, I don't want to create another silo and break the web, so all of these posts just have regular URLs. Maybe that means this isn't really a truly secret RSS Club, but URLs are more important than our special club; that's sort of the whole point of this.

Welcome to RSS Club

So there it is: my first RSS Exclusive post.

Welcome to a simpler, freer web!

]]>
https://log.rdl.ph/post/the-secret-feed-warning-boring-.html73hI7P352hs34dosJ5vUBNThu, 02 Apr 2020 19:05:00 GMT