Based on big-react, I am going to implement React v18's core features from scratch using WASM and Rust.
Code repository: https://github.com/ParadeTo/big-react-wasm
The tag for this article: v17
One of the biggest selling points of React v18 is Concurrent Mode. Next, we will try to implement it, focusing on the Time Slicing feature. If you are not familiar with time slicing, you can first read this article.
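Before diving into the implementation, the core idea of time slicing can be shown in a few lines of plain Rust (a standalone sketch with illustrative names, unrelated to the repository's code): do work until a small time budget runs out, hand control back to the caller, and repeat.

```rust
use std::time::{Duration, Instant};

// Process work units until the queue is empty or the slice budget is
// exhausted. Returns true when all work is done, false if work remains.
fn work_in_slices(work: &mut Vec<u32>, slice: Duration) -> bool {
    let start = Instant::now();
    while !work.is_empty() {
        if start.elapsed() >= slice {
            return false; // budget used up: yield with work remaining
        }
        work.pop();
        std::thread::sleep(Duration::from_millis(1)); // simulate one unit's cost
    }
    true
}

fn main() {
    let mut work: Vec<u32> = (0..20).collect();
    let mut yields = 0;
    // In a browser we would yield to the host between calls; here we just loop.
    while !work_in_slices(&mut work, Duration::from_millis(5)) {
        yields += 1;
    }
    println!("finished after yielding {} times", yields);
}
```

This is exactly the shape of the loop we will build below: the interesting part is deciding *when* to stop and *how* to resume.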
The part with the most changes is still the work_loop.rs file in the reconciler library. Let's review the previous process:

```
schedule_update_on_fiber -> ensure_root_is_scheduled -> perform_sync_work_on_root -> work_loop -> commit_root
```

Now it needs to be changed to this:

```
schedule_update_on_fiber -> ensure_root_is_scheduled -> perform_sync_work_on_root -> render_root -> work_loop_sync -> commit_root
                                       |                                                 ^    |                            ^
                                       +--> perform_concurrent_work_on_root -------------+    +--> work_loop_concurrent ---+
```

This means a Concurrent Mode branch has been added, along with render_root, making the Render and Commit phases more explicit.
For perform_concurrent_work_on_root, we need to use the previously implemented scheduler to schedule it:
```rust
let scheduler_priority = lanes_to_scheduler_priority(cur_priority.clone());
let closure = Closure::wrap(Box::new(move |did_timeout_js_value: JsValue| {
    let did_timeout = did_timeout_js_value.as_bool().unwrap();
    perform_concurrent_work_on_root(root_cloned.clone(), did_timeout)
}) as Box<dyn Fn(JsValue) -> JsValue>);
let function = closure.as_ref().unchecked_ref::<Function>().clone();
closure.forget();
new_callback_node = Some(unstable_schedule_callback_no_delay(
    scheduler_priority,
    function,
))
```
In perform_concurrent_work_on_root, we determine whether the Render work is complete from the status returned at the end of the Render phase. If the status is ROOT_INCOMPLETE, the work is not finished, i.e. the time slice ran out and rendering was paused. In that case, we need to return a function again:
```rust
let exit_status = render_root(root.clone(), lanes.clone(), should_time_slice);
if exit_status == ROOT_INCOMPLETE {
    let root_cloned = root.clone();
    let closure = Closure::wrap(Box::new(move |did_timeout_js_value: JsValue| {
        let did_timeout = did_timeout_js_value.as_bool().unwrap();
        perform_concurrent_work_on_root(root_cloned.clone(), did_timeout)
    }) as Box<dyn Fn(JsValue) -> JsValue>);
    let function = closure.as_ref().unchecked_ref::<Function>().clone();
    closure.forget();
    return function.into();
}
```
This is because the scheduler has the following characteristic. Consider this example:
```js
import Scheduler from 'react/packages/scheduler'

function func2(didTimeout) {
  if (!didTimeout) console.log(2)
}

function func1() {
  console.log(1)
  return func2
}

const task = Scheduler.unstable_scheduleCallback(1, func1)
```
If func1 finishes and returns func2, the two functions share the expiration time of the task. What does that mean? Suppose the task expires after 3 seconds. If func1 takes 2 seconds to execute, then when func2 runs, the task has not yet expired, so didTimeout is false. If func1 takes 4 seconds, then when func2 runs, the task has already expired, so didTimeout is true.
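To make that rule concrete, here is a tiny sketch (illustrative types only; the real scheduler tracks this internally): the continuation inherits the original task's absolute expiration time, and didTimeout simply reports whether that deadline has passed when the continuation starts.

```rust
// A task's deadline, fixed when the task is first scheduled. The
// continuation it returns is judged against this same deadline.
#[derive(Clone, Copy)]
struct Task {
    expiration_ms: u64, // absolute deadline of the original task
}

// What the scheduler passes to a continuation as `did_timeout`.
fn did_timeout(task: Task, now_ms: u64) -> bool {
    now_ms >= task.expiration_ms
}

fn main() {
    let task = Task { expiration_ms: 3_000 };
    // func1 ran for 2s: the continuation starts at t=2000, not yet expired
    assert!(!did_timeout(task, 2_000));
    // func1 ran for 4s: the continuation starts at t=4000, already expired
    assert!(did_timeout(task, 4_000));
    println!("ok");
}
```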
Otherwise, if the status is ROOT_COMPLETED, the Render phase is completely finished and the Commit phase can proceed, the same as before.
Next, let's look at render_root, which has gained a new parameter, should_time_slice. If it is true, render_root calls work_loop_concurrent; otherwise it calls work_loop_sync:
```rust
loop {
    match if should_time_slice {
        work_loop_concurrent()
    } else {
        work_loop_sync()
    } {
        Ok(_) => {
            break;
        }
        Err(e) => unsafe {
            log!("work_loop error {:?}", e);
            WORK_IN_PROGRESS = None
        },
    };
}
```
The difference between work_loop_concurrent and work_loop_sync is that work_loop_concurrent adds the unstable_should_yield_to_host check, which tells us whether the time slice has run out:
```rust
fn work_loop_concurrent() -> Result<(), JsValue> {
    unsafe {
        while WORK_IN_PROGRESS.is_some() && !unstable_should_yield_to_host() {
            log!("work_loop_concurrent");
            perform_unit_of_work(WORK_IN_PROGRESS.clone().unwrap())?;
        }
    }
    Ok(())
}
```
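Conceptually, unstable_should_yield_to_host boils down to comparing how long the current slice has been running against a small frame budget (around 5ms in React's scheduler). A standalone sketch with illustrative names:

```rust
use std::time::{Duration, Instant};

// Illustrative stand-in for unstable_should_yield_to_host: yield once
// the current slice has been running longer than the frame budget.
struct SliceClock {
    start: Instant,
    budget: Duration,
}

impl SliceClock {
    fn begin() -> Self {
        SliceClock {
            start: Instant::now(),
            budget: Duration::from_millis(5), // React uses a ~5ms default
        }
    }

    fn should_yield(&self) -> bool {
        self.start.elapsed() >= self.budget
    }
}

fn main() {
    let clock = SliceClock::begin();
    assert!(!clock.should_yield()); // fresh slice: keep working
    std::thread::sleep(Duration::from_millis(6));
    assert!(clock.should_yield()); // budget exhausted: yield to the host
    println!("ok");
}
```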
When the loop exits, if should_time_slice is true and WORK_IN_PROGRESS is not empty, the Render phase is not yet complete, and render_root returns ROOT_INCOMPLETE:
```rust
unsafe {
    EXECUTION_CONTEXT = prev_execution_context;
    WORK_IN_PROGRESS_ROOT_RENDER_LANE = Lane::NoLane;
    if should_time_slice && WORK_IN_PROGRESS.is_some() {
        return ROOT_INCOMPLETE;
    }
    ...
}
```
In this way, the entire process is connected.
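The whole concurrent path can be simulated in a few lines of plain Rust (no wasm, no real scheduler; all names are illustrative): render_root does a bounded amount of work per call, and the caller keeps rescheduling itself, like returning a continuation, until it sees a completed status.

```rust
#[derive(PartialEq, Debug)]
enum ExitStatus {
    RootIncomplete,
    RootCompleted,
}

struct Root {
    remaining_units: u32,
}

// One call = one slice: at most `budget_units` units of work (the unit
// budget stands in for the time budget checked by should-yield).
fn render_root(root: &mut Root, should_time_slice: bool, budget_units: u32) -> ExitStatus {
    let budget = if should_time_slice { budget_units } else { u32::MAX };
    let done_now = root.remaining_units.min(budget);
    root.remaining_units -= done_now;
    if root.remaining_units > 0 {
        ExitStatus::RootIncomplete
    } else {
        ExitStatus::RootCompleted
    }
}

fn main() {
    let mut root = Root { remaining_units: 100 };
    let mut continuations = 0;
    // Like perform_concurrent_work_on_root returning a continuation:
    // keep calling until render_root reports completion, then "commit".
    while render_root(&mut root, true, 30) == ExitStatus::RootIncomplete {
        continuations += 1;
    }
    // 100 units at 30 per slice: incomplete three times, then complete
    assert_eq!(continuations, 3);
    println!("committed after {} continuations", continuations);
}
```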
When should Concurrent Mode be used? It depends on the priority of the update. Generally, less urgent updates can use Concurrent Mode.
In big-react, it is currently defined as follows:
```ts
const eventTypeToEventPriority = (eventType: string) => {
  switch (eventType) {
    case 'click':
    case 'keydown':
    case 'keyup':
      return SyncLane
    case 'scroll':
      return InputContinuousLane
    // TODO more event types
    default:
      return DefaultLane
  }
}
```
Before the event callback is invoked, the scheduler's global variable currentPriorityLevel is set to the corresponding value:
```ts
// react-dom
const triggerEventFlow = (paths: EventCallback[], se: SyntheticEvent) => {
  for (let i = 0; i < paths.length; i++) {
    const callback = paths[i]
    runWithPriority(eventTypeToEventPriority(se.type), () => {
      callback.call(null, se)
    })
    if (se.__stopPropagation) {
      break
    }
  }
}

// scheduler
function unstable_runWithPriority(priorityLevel, eventHandler) {
  ...
  var previousPriorityLevel = currentPriorityLevel
  currentPriorityLevel = priorityLevel
  try {
    return eventHandler()
  } finally {
    currentPriorityLevel = previousPriorityLevel
  }
}
```
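The same save-and-restore pattern translates directly to Rust. A minimal sketch (illustrative: it uses a thread_local in place of the scheduler's actual global, and it skips the panic safety that the JS version gets from try/finally):

```rust
use std::cell::Cell;

thread_local! {
    // Stand-in for the scheduler's currentPriorityLevel global
    // (3 = NormalPriority here, purely illustrative).
    static CURRENT_PRIORITY: Cell<u8> = Cell::new(3);
}

// Run `f` with a temporary priority, restoring the previous one after.
fn run_with_priority<T>(priority: u8, f: impl FnOnce() -> T) -> T {
    let prev = CURRENT_PRIORITY.with(|p| p.replace(priority));
    let result = f();
    CURRENT_PRIORITY.with(|p| p.set(prev)); // restore on the way out
    result
}

fn main() {
    let seen = run_with_priority(1, || CURRENT_PRIORITY.with(|p| p.get()));
    assert_eq!(seen, 1); // the handler sees the temporary priority
    assert_eq!(CURRENT_PRIORITY.with(|p| p.get()), 3); // restored afterwards
    println!("ok");
}
```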
However, there is a small problem here: eventTypeToEventPriority returns a Lane, which needs to be converted to the scheduler's Priority. So I made the following modification:
```rust
fn event_type_to_event_priority(event_type: &str) -> Priority {
    let lane = match event_type {
        "click" | "keydown" | "keyup" => Lane::SyncLane,
        "scroll" => Lane::InputContinuousLane,
        _ => Lane::DefaultLane,
    };
    lanes_to_scheduler_priority(lane)
}
```
But this only updates the Priority in the scheduler. We also need to update the Lane in the reconciler. How is that done? The answer lies in fiber_hooks. The second value returned by useState is a function; when it is called, the following method runs:
```rust
fn dispatch_set_state(
    fiber: Rc<RefCell<FiberNode>>,
    update_queue: Rc<RefCell<UpdateQueue>>,
    action: &JsValue,
) {
    let lane = request_update_lane();
    let update = create_update(action.clone(), lane.clone());
    enqueue_update(update_queue.clone(), update);
    unsafe {
        schedule_update_on_fiber(fiber.clone(), lane);
    }
}
```
It calls request_update_lane, which derives the corresponding Lane from the scheduler's current Priority:
```rust
pub fn request_update_lane() -> Lane {
    let current_scheduler_priority_level = unstable_get_current_priority_level();
    let update_lane = scheduler_priority_to_lane(current_scheduler_priority_level);
    update_lane
}
```
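The two conversion helpers can be sketched as a pair of matches. This is a simplified sketch of what lanes_to_scheduler_priority and scheduler_priority_to_lane are assumed to do (the variants mirror React's lanes and the scheduler's priority levels; the repository's exact mapping may differ):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Lane {
    SyncLane,
    InputContinuousLane,
    DefaultLane,
    NoLane,
}

#[derive(Clone, Copy, PartialEq, Debug)]
enum Priority {
    ImmediatePriority,
    UserBlockingPriority,
    NormalPriority,
    IdlePriority,
}

fn lanes_to_scheduler_priority(lane: Lane) -> Priority {
    match lane {
        Lane::SyncLane => Priority::ImmediatePriority,
        Lane::InputContinuousLane => Priority::UserBlockingPriority,
        Lane::DefaultLane => Priority::NormalPriority,
        Lane::NoLane => Priority::IdlePriority,
    }
}

fn scheduler_priority_to_lane(p: Priority) -> Lane {
    match p {
        Priority::ImmediatePriority => Lane::SyncLane,
        Priority::UserBlockingPriority => Lane::InputContinuousLane,
        Priority::NormalPriority => Lane::DefaultLane,
        Priority::IdlePriority => Lane::NoLane,
    }
}

fn main() {
    // Round trip: the lane requested inside an event handler matches the
    // lane the event was dispatched with.
    let lane = Lane::InputContinuousLane;
    assert_eq!(scheduler_priority_to_lane(lanes_to_scheduler_priority(lane)), lane);
    println!("ok");
}
```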
In this way, when an event is triggered, both the Priority in the scheduler and the Lane of the current update are set to the corresponding values.
Everything is ready, so let's test it. For convenience, I temporarily lowered the priority of click:
```rust
fn event_type_to_event_priority(event_type: &str) -> Priority {
    let lane = match event_type {
        "click" | "keydown" | "keyup" => Lane::InputContinuousLane,
        "scroll" => Lane::InputContinuousLane,
        _ => Lane::DefaultLane,
    };
    lanes_to_scheduler_priority(lane)
}
```
Then use the following example:
```tsx
import {useState} from 'react'

function App() {
  const [num, updateNum] = useState(0)
  const len = 100
  return (
    <ul
      onClick={(e) => {
        updateNum((num: number) => num + 1)
      }}>
      {Array(len)
        .fill(1)
        .map((_, i) => {
          return <Child i={`${i} ${num}`} />
        })}
    </ul>
  )
}

function Child({i}) {
  return <p>i am child {i}</p>
}

export default App
```
After running it, you get the following result:
The left part shows the initial render without time slicing, and the right part shows the update after clicking. You can see many small tasks in the profile, which proves that our time-slicing feature works.
You can check out the updated code here.
Please give it a star!