The big challenge with this approach, not touched on in the post, is version skew. During a deploy you'll have some new clients talking to old servers and some old clients talking to new servers. The ViewModel is a minimal representation of the data and you can constrain it with backwards-compatibility guarantees (e.g. Protos or Thrift), while the UI component JSON and its associated JS must be compatible with the running client.
Vercel fixes this for a fee: https://vercel.com/docs/skew-protection
I do wonder how many people will use the new React features and then have short outages during deploys, like the FOUC of the past. Even their Pro plan has only 12 hours of protection, so if you leave a tab open for 24 hours and then click a button, it might hit a server where the server components and functions are incompatible.
mattbessey 5 hours ago [-]
This was a really compelling article Dan, and I say that as a long-time advocate of "traditional" server-side rendering like Rails of old.
I think your checklist of characteristics frames things well. It reminds me of Remix's introduction to the library:
https://remix.run/docs/en/main/discussion/introduction
> Building a plain HTML form and server-side handler in a back-end heavy web framework is just as easy to do as it is in Remix. But as soon as you want to cross over into an experience with animated validation messages, focus management, and pending UI, it requires a fundamental change in the code. Typically, people build an API route and then bring in a splash of client-side JavaScript to connect the two. With Remix, you simply add some code around the existing "server side view" without changing how it works fundamentally
It was this argument (and a lot of playing around with challengers like htmx and JSX-like syntax for Python / Go) that has brought me round to the idea that RSCs or something similar might well be the way to go.
Bit of a shame seeing how poor some of the engagement has been on here and Reddit though. I thought the structure and length of the article was justified and helpful. It's concerning how many people's responses are quite clearly covered in TFA they didn't read...
hcarvalhoalves 45 minutes ago [-]
> REST (or, rather, how REST is broadly used) encourages you to think in terms of Resources rather than Models or ViewModels. At first, your Resources start out as mirroring Models. But a single Model rarely has enough data for a screen, so you develop ad-hoc conventions for nesting Models in a Resource. However, including all the relevant Models (e.g. all Likes of a Post) is often impossible or impractical, so you start adding ViewModel-ish fields like friendLikes to your Resources.
So, let's assume the alternative universe, where we did not mess up and get REST wrong.
There's no constraint saying a resource (in the hypermedia sense) has to have the same shape as your business data, or anything else really. A resource should have whatever representation is most useful to the client. If your language is "components" because you're making an interactive app – sure, go ahead and represent this as a resource. And we did that for a while, with xmlhttprequest + HTML fragments, and PHP includes on the server side.
What we were missing all along was a way to decouple the browser from a single resource (the whole document), so we could have nested resources, and keep client state intact on refresh?
modal-soul 1 hours ago [-]
I like this article a lot more than the previous one; not because of length.
In the previous article, I was annoyed a bit by some of the fluffiness and redefinition of concepts that I was already familiar with. This one, however, felt much more concrete, and grounded in the history of the space, showing the tradeoffs and improvements in certain areas between them.
The section that amounted to "I'm doing all of this other stuff just to turn it into HTML. With nice, functional, reusable JSX components, but still." really hit close to how I've felt.
My question is: When did you first realize the usefulness of something like RSC? If React had cooked a little longer before gaining traction as the client-side thing, would it have been for "two computers"?
I'm imagining a past where there was some "fuller stack" version that came out first, then there would've been something that could've been run on its own. "Here's our page-stitcher made to run client-side-only".
acemarke 57 minutes ago [-]
Sounds like another one of Dan's talks, "React from Another Dimension", where he imagines a world in which server-side React came first and then extracted client functionality:
- https://www.youtube.com/watch?v=zMf_xeGPn6s
Really like this pattern; it's a new point on the curve of “how much rendering do you give the client”. In the described architecture, JSX-as-JSON provides versatility once you've already shipped all the behavior to the client (a bunch of React components in a static JS bundle that can be cached; the React Native example demonstrated this well).
One way to decide if this architecture is for you is to consider where your app lands on the curve of “how much rendering code should you ship to client vs. how much unhydrated data should you ship”. On that curve you can find everything from fully server-rendered HTML to REST APIs and everything in between, plus some less common examples too.
Fully server-rendered HTML is among the fastest to usefulness, relying only on the browser to render HTML. By contrast, in traditional React, server rendering is only half of the story: after the layout is sent, a great many API calls have to happen to produce a fully hydrated page.
Your sweet spot on that curve is different for every app and depends on a few factors - chiefly, your app’s blend of rate-of-change (maintenance burden over time) and its interactivity.
If the app will not be interactive, take advantage of fully-backend rendering of HTML since the browser’s rendering code is already installed and wicked fast.
If it’ll be highly interactive with changes that ripple across the app, you could go all the way past plain React to a Redux/Flux-like central client-side data store.
And if it’ll be extremely interactive client-side (eg. Google Docs), you may wish to ship all the code to the client and have it update its local store then sync to the server in the background.
But this React Server Components paradigm is surprisingly suited to a great many CRUD apps. Definitely will consider it for future projects - thanks for such a great writeup!
h14h 4 hours ago [-]
Excellent read! This is the first time I feel like I finally have a good handle on the "what" & "why" of RSCs.
It has also sparked a strong desire to see RSCs compared and contrasted with Phoenix LiveView.
The distinction between RSCs sending "JSX" over the Wire, and LiveViews sending "minimal HTML diffs"[0] over the wire is fascinating to me, and I'm really curious how the two methodologies compare/contrast in practice.
It'd be especially interesting to see how client-driven mutations are handled under each paradigm. For example, let's say an "onClick" is added to the `<button>` element in the `LikeButton` client component -- it immediately brings up a laundry list of questions for me:
1. Do you update the client state optimistically?
2. If you do, what do you do if the server request fails?
3. If you don't, what do you do instead? Intermediate loading state?
4. What happens if some of your friends submit likes the same time you do?
5. What if a user accidentally "liked", and tries to immediately "unlike" by double-clicking?
6. What if a friend submitted a like right after you did, but theirs was persisted before yours?
(I'll refrain from adding questions about how all this would work in a globally distributed system (like BlueSky) with multiple servers and DB replicas ;))
Essentially, I'm curious whether RSCs offer potential solutions to the same sorts of problems Jose Valim identified here[1] when looking at Remix Submission & Revalidation.
Overall, LiveView & RSCs are easily my top two most exciting "full stack" application frameworks, and I love seeing how radically different their approaches are to solving the same set of problems.
[0]: <https://www.phoenixframework.org/blog/phoenix-liveview-1.0-r...>
[1]: <https://dashbit.co/blog/remix-concurrent-submissions-flawed>
Really appreciate the quality you put into expressing these things. It was nice just to see a well laid-out justification of how trying to tie a frontend to a backend can get messy quickly. I'm definitely going to remember the "ungrounded abstraction" as a useful concept here.
skydhash 10 hours ago [-]
Everything old is new again, and I'm not even that old, yet I know that you can return HTML fragments from an AJAX call. But this is worse from any architectural point of view. Why?
The old way was to return HTML fragments and add them to the DOM. There was still a separation of concern as the presentation layer on the server didn't care about the interface presented on the client. It was just data, generally composed by a template library. The advent of SPAs made it so that we could reunite the presentation layer (with the template library) on the frontend and just send down the data to be composed with the request's response.
The issue with this approach is that it splits the frontend again, but now you have two template libraries to take care of (in this case one, but on the two sides). The main advantage of having a boundary is that you can have the best representation of data for each side's logic, converting only when needed. And the conversion layer needs to be simple enough to not introduce complexity of its own. JSON is fine as it's easy to audit a parser, and HTML is fine because it's mostly used as-is on the other layer. We also have binary representations, but they also have strong arguments for their use.
With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.
danabramov 9 hours ago [-]
It feels like you haven't read the article and commented on the title.
>The old way was to return HTML fragments and add them to the DOM.
Yes, and the problem with that is described at the end of this part: https://overreacted.io/jsx-over-the-wire/#async-xhp
>JSON is fine [..] With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.
I really don't know what you mean; the transport literally is JSON. We're not literally sending JSX anywhere. That's also in the article. The JSON output is shown about a dozen times throughout, especially in the third part. You can search for "JSON" on the page. It appears 97 times.
skydhash 9 hours ago [-]
From the article:
Replacing innerHTML wasn’t working out particularly well—especially for the highly interactive Ads product—which made an engineer (who was not me, by the way) wonder whether it’s possible to run an XHP-style “tags render other tags” paradigm directly on the client computer without losing state between the re-renders.
HTML is still a document format, and while there have been a lot of features added to browsers over the years, we still have this as the core of any web page. It's always a given that state doesn't survive renders. In desktop software, the process is alive while the UI is shown, so that's great for having state, but web pages started as documents, and the API reflects that. So saying that it's an issue is the same as saying a fork is not great for cutting.
React is an abstraction over the DOM for having a better API when you're trying not to re-render. And you can then simplify the format for transferring data between server and client. Net win on both sides.
But the technique described in the article is like having a hammer and seeing nails everywhere. I don't see the advantages of having JSX representation of JSON objects on the server side.
danabramov 8 hours ago [-]
>I don't see the advantages of having JSX representation of JSON objects on the server side.
That's not what we're building towards. I'm just using "breaking JSON apart" as a narrative device to show that Server Components componentize the UI-specific parts of the API logic (which previously lived in ad-hoc ViewModel-like parts of REST responses, or in the client codebase where REST responses get massaged).
The change-up happens at this point in the article: https://overreacted.io/jsx-over-the-wire/#viewmodels-revisit...
If you're interested in the "final" code, it's here: https://overreacted.io/jsx-over-the-wire/#final-code-slightl....
It blends the previous "JSON-building" into components.
skydhash 8 hours ago [-]
I'm pointing out that this particular pattern (Server Components) is engendering more complexity than necessary.
If you have a full blown SPA on the client side, you shouldn't use ViewModels, as that will tie your backend API to the client. If you go for a mixed approach, then your presentation layer is on the server and it's not an API.
HTMX is cognizant of this fact. What it adds are useful and nice abstractions on the basis that the interface is constructed on one end and used on the other. RSC is a complex solution for a simple problem.
danabramov 1 hours ago [-]
>you shouldn't use ViewModels, as that will tie your backend API to the client.
It doesn't because you can do this as a layer in front of the backend, as argued here: https://overreacted.io/jsx-over-the-wire/#backend-for-fronte...
Note “instead of replacing your existing REST API, you can add…”. It’s a thing people do these days! Recognizing the need for this layer has plenty of benefits.
As for HTMX, I know you might disagree, but I think it’s actually very similar in spirit to RSC. I do like it. Directives are like very limited Client components, server partials of your choice are like very limited Server components. It’s a good way to get a feel for the model.
whalesalad 9 hours ago [-]
to be fair this post is enormous. if i were to try and print it on 8.5x11 it comes out to 71 pages
danabramov 9 hours ago [-]
I mean sure but not commenting is always an option. I don't really understand the impulse to argue with a position not expressed in the text.
phpnode 9 hours ago [-]
it happens because people really want to participate in the conversation, and that participation is more important to them than making a meaningful point.
pier25 9 hours ago [-]
Maybe add a TLDR section?
danabramov 8 hours ago [-]
I don't think it would do justice to the article. If I could write a good tldr, I wouldn't need to write a long article in the first place. I don't think it's important to optimize the article for a Hacker News discussion.
That said, I did include recaps of the three major sections at their end:
- https://overreacted.io/jsx-over-the-wire/#recap-json-as-comp...
- https://overreacted.io/jsx-over-the-wire/#recap-components-a...
- https://overreacted.io/jsx-over-the-wire/#recap-jsx-over-the...
Look, it's your article Dan, but it would be in your best interest to provide a tldr with the general points. It would help so that people don't misjudge your article (this has already happened). It could also make the article more interesting to people who initially dismissed reading something so long. And providing some kind of initial framework might help those who are actually reading it follow along.
yanndinendal 7 hours ago [-]
The 3 tl;dr he just linked seem fine.
pier25 6 hours ago [-]
the fact that he needed to link to those in a HN comment proves my point...
swyx 2 hours ago [-]
it really doesn't. stop trying to dumb him down for your personal tastes. he's much better at this than the rest of us
pixl97 9 hours ago [-]
Yet because of that, the issue they were concerned about was shown to the thread's readers without their having to read 75 pages of text.
Quite often people read the forum thread first before wasting their life on some large corpus of text that might be crap. High-quality discussions can point out poor-quality (or at least fundamentally incorrect) posts and the reasons behind them, enlightening the rest of the readers.
aylmao 9 hours ago [-]
> The main advantage of having a boundary is that you can have the best representation of data for each side's logic, converting only when needed.
RSC doesn't impede this. In fact, it improves it. Instead of having your ORM's objects converted to JSON, sent, parsed, and finally manipulated to your UI's needs, you skip the whole "convert to JSON" part. You can go straight from your ORM objects (best for data operations) to UI (best for rendering) and skip having to think about how the heck you'll serialize this to send it over the wire.
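To illustrate, a rough sketch (the `db` query, the ORM shape, and the component names here are mine, not from the article):

    // A server component can go straight from ORM objects to UI, with no
    // hand-written "convert to JSON" step in between.
    async function FriendLikes({ postId }) {
      // Hypothetical ORM call; whatever your data layer exposes.
      const likes = await db.likes.findMany({ where: { postId }, include: { author: true } });
      // Return the UI shape directly; React serializes this tree for the wire.
      return <LikeSummary names={likes.map((like) => like.author.name)} />;
    }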
> With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.
JSX is syntactic sugar for a specific format of JavaScript object. It's a pretty simple format really. From ReactJSXElement.js, L242 [1]:
    element = {
      // This tag allows us to uniquely identify this as a React Element
      $$typeof: REACT_ELEMENT_TYPE,
      // Built-in properties that belong on the element
      type,
      key,
      ref,
      props,
    };
As far as I'm aware, TC39 hasn't yet specified which shape of literal is "ok" and which one is "wrong" to run on a computer, depending on whether that computer has a screen or not. I imagine this is why V8, JSC and SpiderMonkey, etc. let you create objects of any shape you want in any environment. I don't understand what's wrong about using this shape on the server.
[1] https://github.com/facebook/react/blob/e71d4205aed6c41b88e36...
> The old way was to return HTML fragments and add them to the DOM. There was still a separation of concern as the presentation layer on the server didn't care about the interface presented on the client.
I doubt there were many systems where the server-generated HTML fragments were generic enough that the server and client HTML documents didn't need to know anything about each other's HTML. It's conceivable to build such a system, particularly if it's intended for a screen-reader or an extremely thinly-styled web page, but in either of those cases HTML injection over AJAX would have been an unlikely architectural choice.
In practice, all these systems that did HTML injection over AJAX were tightly coupled. The server made strong assumptions about the HTML documents that would be requesting HTML fragments, and the HTML documents made strong assumptions about the shape of the HTML fragments the server would give it.
skydhash 8 hours ago [-]
> where the server-generated HTML fragments were generic enough that the server and client HTML documents didn't need to know anything about each other's HTML.
> all these systems that did HTML injection over AJAX were tightly coupled
That's because the presentation layer originated on the server. What the server didn't care about was the transformation that alters the display of the HTML on the client. So you can add an extension to your browser that translates the text to another language and it wouldn't matter to the server. Or inject your own styles. Even when you do an AJAX request, you can add JS code that discards the response.
rapnie 9 hours ago [-]
> Everything old is new again
An age ago I took interest in KnockoutJS, based on Model-View-ViewModel, and found it pragmatic and easy to use. It was however at the beginning of the mad JavaScript framework-hopping marathon, so it was considered 'obsolete' after a few months. I just peeked, and Knockout still exists:
https://knockoutjs.com/
Btw, I wouldn't hop back, but rather hop forward, like with Datastar that was on HN the other day: https://news.ycombinator.com/item?id=43655914
Knockout was a huge leap in developer experience at the time. It's worth noting that Ryan Carniato, the creator of SolidJS, was a huge fan of Knockout. It's a major influence on SolidJS.
kilroy123 2 hours ago [-]
I was a big fan of knockoutjs back in the day! An app I built with it is still in use today.
jonathanhefner 2 hours ago [-]
RSC is indeed very cool. It also serves as a superior serialization format compared to JSON. For example, it can roundtrip basic types such as `Date` and `Map` with no extra effort.
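For instance, something like this (a sketch; the component and prop names are made up):

    // Server component: pass rich values straight through as props.
    async function Dashboard() {
      const counts = new Map([['views', 128], ['likes', 12]]);
      return <StatsPanel counts={counts} generatedAt={new Date()} />;
    }
    // In the StatsPanel client component, `counts` arrives as a real Map and
    // `generatedAt` as a real Date, with no manual revival from JSON strings.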
One thing I would like to see more focus on in React is returning components from server functions. Right now, using server functions for data fetching is discouraged, but I think it has some compelling use cases. It is especially useful when you have components that need to fetch data dynamically, but you don't want the fetch / data tied to the URL, as it would be with a typical server component. For example, when fetching suggestions for a typeahead text input.
(Self-promotion) I prototyped an API for consuming such components in an idiomatic way: https://github.com/jonathanhefner/next-remote-components. You can see a demo: https://next-remote-components.vercel.app/.
To prove the idea is viable beyond Next.js, I also ported it to the Waku framework (https://github.com/jonathanhefner/twofold-remote-components) and the Twofold framework (https://github.com/jonathanhefner/twofold-remote-components).
I would love to see something like it integrated into React proper.
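A rough sketch of the pattern being described above (file layout, names, and data access are hypothetical; this mirrors the commenter's idea, not an officially recommended API):

    // suggestions.js: a server function that returns UI instead of raw data.
    'use server';
    import { db } from './db'; // hypothetical data layer
    import { SuggestionList } from './SuggestionList'; // assumed shared component

    export async function getSuggestions(query) {
      const items = await db.products.search(query); // illustrative query
      return <SuggestionList items={items} />;
    }

    // Typeahead.js: a client component that renders whatever the server sends back.
    'use client';
    import { useState } from 'react';
    import { getSuggestions } from './suggestions';

    export function Typeahead() {
      const [suggestions, setSuggestions] = useState(null);
      return (
        <>
          <input onChange={async (e) => setSuggestions(await getSuggestions(e.target.value))} />
          {suggestions}
        </>
      );
    }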
AstroBen 50 minutes ago [-]
These all seem to be relatively simple concepts for an experienced programmer to understand, but they're being communicated in a very complex way due to the React world of JSX and Components.
What if we just talked about it only in terms of simple data structures and function composition?
wallrat 8 hours ago [-]
Very well written (as expected) argument for RSC. It's interesting to see the parallels with Inertia.js.
(a bit sad to see all the commenters that clearly haven't read the article though)
android521 8 hours ago [-]
Very well written. It is rare to see these kinds of high quality articles these days.
danabramov 8 hours ago [-]
Thanks!
jauntywundrkind 9 hours ago [-]
Dan Abramov (author) also recently wrote a related post, React for Two Computers:
https://overreacted.io/react-for-two-computers/
https://news.ycombinator.com/item?id=43631004 (66 points, 6 days ago, 54 comments)
I like the abstraction of server components but some of my co-workers seem to prefer HTMX (sending HTML rather than JSON) and can't really see any performance benefit from server components.
Maybe OP could clear up:
- Whether HTML could be sent instead (depending on platform). There is a brief point about not losing state, but if your component does not have input elements, or can have its state thrown away, then maybe raw HTML could work?
- Prop size vs markup/component size. If you send down a component with a 1:9 dynamic-to-static content ratio, wouldn't it be better to have the 90% static preloaded in the client and only the 10% of the data transmitted? Any good heuristic options here?
- "It’s easy to make HTML out of JSON, but not the inverse". What is intrinsic about HTML/XML?
--
Also, is Dan the only maintainer on the React team who does these kinds of posts? Do other members write long-form? It would be interesting to have a second angle.
tbeseda 6 hours ago [-]
A second angle from the same team?
Or reference the 2+ decades written about the same pattern in simpler, faster, less complex implementations.
chacham15 9 hours ago [-]
The main thing that confuses me is that this seems to be PHP implemented in React... and it talks about how to render the first page without a waterfall, and all that makes sense, but the main issue with PHP was that reactivity was much harder. I didn't see / I don't understand how this deals with that.
When you have a post with a like button and the user presses the like button, how do the like button props update? I assume that it would be a REST request to update the like model. You could make the like button refetch the like view model when the button is clicked, but then how do you tie that back to all the other UI elements that need to update as a result? E.g. what if the UI designer wants to put a highlight around posts which have been liked?
On the server, you've already lost the state of the client after that first render, so doing some sort of reverse dependency trail seems fragile. So the only option would be to have the client do it, but then you're back to the waterfall (unless you somehow know the entire state of the client on the server for the server to be able to fully re-render the sub-tree, and what if multiple separate subtrees are involved in this?). I suppose that it is do-able if there exists NO client side state, but it still seems difficult. Am I missing something?
danabramov 9 hours ago [-]
>When you have a post with a like button and the user presses the like button, how do the like button props update?
Right, so there's actually a few ways to do this, and the "best" one kind of depends on the tradeoffs of your UI.
Since Like itself is a Client Component, it can just hit the POST endpoint and update its state locally. I.e. without "refreshing" any of the server stuff. It "knows" it's been liked. This is the traditional Client-only approach.
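As a minimal sketch of that first option (the endpoint path and prop names are made up, not from the article):

    'use client';
    import { useState } from 'react';

    function LikeButton({ postId, initialIsLiked }) {
      const [isLiked, setIsLiked] = useState(initialIsLiked);
      async function toggle() {
        setIsLiked(!isLiked); // update local state right away, no server "refresh"
        await fetch(`/api/posts/${postId}/like`, { method: 'POST' }); // hypothetical endpoint
      }
      return <button onClick={toggle}>{isLiked ? 'Unlike' : 'Like'}</button>;
    }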
Another option is to refetch UI from the server. In the simplest case, refetching the entire screen. Then yes, new props would be sent down (as JSON) and this would update both the Like button (if it uses them as its source of truth) and other UI elements (like the highlights you mentioned). It'll just send the entire thing down (but it will be gracefully merged into the UI instead of replacing it). Of course, if your server always returns an unpredictable output (e.g. a Feed that's always different), then you don't want to do that. You could get more surgical with refreshing parts of the tree (e.g. a subroute) but going the first way (Client-only) in this case would be easier.
In other words, the key thing that's different is that the client-side things are highly dynamic so they have agency in whether to do a client change surgically or to do a coarse roundtrip.
altbdoor 10 hours ago [-]
IMO this feels like Preact "render to string" with Express, though I might be oversimplifying things, and granted it wouldn't have all the niceties that React offers.
Feels like HTMX, feels like we've come full circle.
danabramov 9 hours ago [-]
In my checklist (https://overreacted.io/jsx-over-the-wire/#dans-async-ui-fram...), that would satisfy only (2), (3) if it supports async/await in components, and (4). It would not satisfy (1) or (5) because then you'd have to hydrate the components on the client, which you wouldn't be able to do with Preact if they had server-only logic.
altbdoor 2 hours ago [-]
Thanks for the reply Dan. That was a great write up, if I might add.
And yeap, you're right! If we need a lot more client side interactivity, just rendering JSX on server side won't cut it.
low_tech_punk 10 hours ago [-]
The X in JSX stands for HTMX.
danabramov 9 hours ago [-]
Yes
recursivedoubts 9 hours ago [-]
unfathomably based
scop 8 hours ago [-]
I can't help but read this in a baritone blustering-with-spittle transatlantic voice.
spellboots 10 hours ago [-]
This feels a lot like https://inertiajs.com/ which I've really been enjoying using recently
chrisvenum 10 hours ago [-]
I am a huge fan of Inertia.
I always felt limited by Blade but drained by the complexity of SPAs.
Inertia makes using React/Vue feel as simple as old-school Laravel app.
Long live the monolith.
danabramov 9 hours ago [-]
Yeah, there is quite a bit of overlap!
nop_slide 9 hours ago [-]
Just use Django/HTMX, Rails/Hotwire, or Laravel/Livewire
pier25 9 hours ago [-]
Phoenix/Liveviews
Fresh/Partials
Astro/HTMX with Partials
jacobobryant 9 hours ago [-]
The framework checklist[1] makes me think of Fulcro: https://fulcro.fulcrologic.com/. To a first approximation you could think of it like defining a GraphQL query alongside each of your UI components. When you load data for one component (e.g. a top-level page component), it combines its own query with the queries from its children UI components.
[1] https://overreacted.io/jsx-over-the-wire/#dans-async-ui-fram...
Yes, another case of old school web dev making a comeback. “HTML over the wire” is basically server-rendered templates (php, erb, ejs, jinja), sent asynchronously as structured data and interpreted by React to render the component tree.
What’s being done here isn’t entirely new. Turbo/Hotwire [1], Phoenix LiveView, even Facebook’s old Async XHP explored similar patterns. The twist is using JSX to define the component tree server-side and send it as JSON, so the view model logic and UI live in the same place. Feels new, but super familiar, even going back to CGI days.
[1] https://hotwired.dev
Right? Right. I had similar thoughts (API that's the parent of the view? You mean a controller?), and quit very early into the post. Didn't realize it was Dan Abramov, or I might've at least skimmed the 70% and 99% marks, but there's no going back now.
Who is this written for? A junior dev? Or, are we minting senior devs with no historical knowledge?
danabramov 9 hours ago [-]
>What’s being done here isn’t entirely new. Turbo/Hotwire [1], Phoenix LiveView, even Facebook’s old Async XHP explored similar patterns.
Right, that's why it's in the post: https://overreacted.io/jsx-over-the-wire/#async-xhp
Likewise with CGI: https://overreacted.io/jsx-over-the-wire/#html-ssi-and-cgi
Agree there's echoes of "old" in "new" but there are also distinct new things too :)
yawaramin 9 hours ago [-]
I skimmed over this and imho it would be better to cut like 30% of the exposition and split it up into a series of articles tackling each style separately. Just my 2c.
danabramov 8 hours ago [-]
I'm hoping someone will do something like that. I try to write with the audience of writers in mind.
cstew913 9 hours ago [-]
It reminds me of when I sent HTML back from my Java Servlets.
It's exciting to see server side rendering come back around.
alejalapeno 10 hours ago [-]
I've represented JSX/the component hierarchy as JSON for CMS composition of React components. If you think of props as CMS inputs and children as nesting components then all the CMS/backend has to do is return the JSON representation and the frontend only needs to loop over it with React.createElement().
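Roughly like this, if I understand the approach (the JSON node shape and the component registry are my assumptions):

    import React from 'react';
    import { Hero, Card, RichText } from './components'; // components the CMS may reference

    // Map CMS "type" strings to actual React components.
    const registry = { Hero, Card, RichText };

    // The CMS/backend returns nodes like { type: "Hero", props: {...}, children: [...] }.
    function renderNode(node) {
      if (typeof node === 'string') return node; // plain text node
      const Component = registry[node.type] ?? node.type; // fall back to host elements like "div"
      return React.createElement(
        Component,
        node.props,
        ...(node.children ?? []).map(renderNode)
      );
    }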
no_wizard 9 hours ago [-]
I believe there is a project (not sure if it's active) called JSX2 that treated this exact problem as a first-class concern. It was pretty fast too, and emulated the React API of the time quite well. This was 4-5 years ago at least.
gherkinnn 10 hours ago [-]
There is a part of my brain that is intrigued by React Server Components. I kinda get it.
And yet, I see nothing but confusion around this topic. For two years now. I see Next.js shipping foot guns, I see docs on these rendering modes almost as long as those covering all of Django, and I see lengthy blog posts like this.
When the majority of problems can be solved with Django, why tie yourself into knots like this? At what point is it worth it?
danabramov 9 hours ago [-]
I think the rollout is a bit messy (especially because it wasn't introduced as a new thing but kind of replaced an already highly used but different thing). There are pros and cons to that kind of rollout. The tooling is also yet to mature. And we're still figuring out how to educate people on it.
That said, I also think the basic concepts of RSC itself (not "rendering modes", which are a Next thing) are very simple and "up there" with closures, imports, async/await, and structured programming in general. They deserve to be learned and broadly understood.
We don't have to go crazy. Let's just meet at MVC and call it a day, deal?
wild_egg 10 hours ago [-]
Deja vu with this blog. Another overengineered abstraction recreating things that already exist.
Misunderstanding REST only to reinvent it in a more complex way. If your API speaks JSON, it's not REST unless/until you jump through all of these hoops to build a hypermedia client on top of it to translate the bespoke JSON into something meaningful.
Everyone ignores the "hypermedia constraint" part of REST and then has to work crazy magic to make up for it.
Instead, have your backend respond with HTML and you get everything else out of the box for free with a real REST interface.
danabramov 9 hours ago [-]
>Another overengineered abstraction recreating things that already exist.
This section is for you: https://overreacted.io/jsx-over-the-wire/#html-ssi-and-cgi
>Everyone ignores the "hypermedia constraint" part of REST and then has to work crazy magic to make up for it.
Right, that's why I've linked to https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi... the moment we started talking about this. The post also clarifies multiple times that I'm talking about how REST is used in practice, not its "textbook" interpretation that nobody refers to except in these arguments.
Strawmanning the alternative as CGI with shell scripts really makes the entire post that much weaker.
> nobody refers to except in these arguments.
Be the change, maybe? People use REST like this because people write articles like this which uses REST this way.
danabramov 8 hours ago [-]
>Strawmanning the alternative as CGI with shell scripts really makes the entire post that much weaker.
I wasn't trying to strawman it--I was genuinely trying to show the historical progression. The snark was intended for the likely HN commenter who'd say this without reading, but the rest of the exploration is sincere. I tried to do it justice but lmk if I missed the mark.
>Be the change, maybe?
That's what I'm trying to do :-) This article is an argument for hypermedia as the API. See the shape of response here: https://overreacted.io/jsx-over-the-wire/#the-data-always-fl...
I think I've sufficiently motivated why that response isn't HTML originally; however, it can be turned into HTML which is also mentioned in the article.
timw4mail 9 hours ago [-]
The hypermedia constraint is crazy magic itself. It's not like HATEOAS is fewer steps on the application and server side.
aylmao 9 hours ago [-]
We already have one way to render things on the browser, everyone. Wrap it up, there's definitely no more to explore here.
And while we're at it, I'd like to know, why are people still building new and different game engines, programming languages, web browsers, operating systems, shells, etc, etc. Don't they know those things already exist?
/s
Joking aside, what's wrong with finding a new way of doing something? This is how we learn and discover things.
whalesalad 10 hours ago [-]
I feel like this is the kind of post I would write if I took 2-3x the standard dose of Adderall.
yawaramin 9 hours ago [-]
It's the standard dose of Abramov.
danabramov 9 hours ago [-]
This is what happens when I don't write for a few years
emmanueloga_ 8 hours ago [-]
Hey, thanks for sharing your thoughts! I appreciate you putting this out there.
One bit of hopefully constructive feedback: your previous post ran about 60 printed pages, this one's closer to 40 (just using that as a rough proxy for time-to-read). I’ve only skimmed both for now, but I found it hard to pin down the main purpose or takeaway. An abstract-style opening and a clear conclusion would go a long way, like in academic papers. I think that makes dense material way more digestible.
- https://overreacted.io/jsx-over-the-wire/#recap-json-as-comp...
- https://overreacted.io/jsx-over-the-wire/#recap-components-a...
- https://overreacted.io/jsx-over-the-wire/#recap-jsx-over-the...
I don't think I can compress it further. Generally speaking I'm counting on other people carrying useful things out of my posts and finding more concise formats for those.
emmanueloga_ 7 hours ago [-]
From my perspective, the article seems primarily focused on promoting React Server Components, so you could mention that at the very top. If that’s not the case, then a clearer outline of the article’s objectives would help. In technical writing, it’s generally better to make your argument explicit rather than leave it open to reader interpretation or including a "twist" at the end.
An outline doesn't have to be a compressed version, I think more like a map of the content, which tells me what to expect as I make progress through the article. You might consider using a structure like SCQA [1] or similar.
1: https://analytic-storytelling.com/scqa-what-is-it-how-does-i...
I appreciate the suggestions but that’s just not how I like to write. There’s plenty of people who do so you might find their writing more enjoyable. I’m hoping some of them will pick something useful in my writing too, which would help it reach a wider audience.